2025-05-04 00:00:10.250540 | Job console starting...
2025-05-04 00:00:10.262941 | Updating repositories
2025-05-04 00:00:10.355122 | Preparing job workspace
2025-05-04 00:00:11.931534 | Running Ansible setup...
2025-05-04 00:00:17.976748 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-04 00:00:18.912166 |
2025-05-04 00:00:18.912295 | PLAY [Base pre]
2025-05-04 00:00:18.956352 |
2025-05-04 00:00:18.956503 | TASK [Setup log path fact]
2025-05-04 00:00:18.998946 | orchestrator | ok
2025-05-04 00:00:19.041220 |
2025-05-04 00:00:19.041349 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-04 00:00:19.123522 | orchestrator | ok
2025-05-04 00:00:19.149870 |
2025-05-04 00:00:19.149981 | TASK [emit-job-header : Print job information]
2025-05-04 00:00:19.261234 | # Job Information
2025-05-04 00:00:19.261468 | Ansible Version: 2.15.3
2025-05-04 00:00:19.261505 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-04 00:00:19.261534 | Pipeline: periodic-midnight
2025-05-04 00:00:19.261555 | Executor: 7d211f194f6a
2025-05-04 00:00:19.261574 | Triggered by: https://github.com/osism/testbed
2025-05-04 00:00:19.261592 | Event ID: ccb8b5d971f94885aa5ddccaf643e19c
2025-05-04 00:00:19.270920 |
2025-05-04 00:00:19.271022 | LOOP [emit-job-header : Print node information]
2025-05-04 00:00:19.596514 | orchestrator | ok:
2025-05-04 00:00:19.596760 | orchestrator | # Node Information
2025-05-04 00:00:19.596798 | orchestrator | Inventory Hostname: orchestrator
2025-05-04 00:00:19.596820 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-04 00:00:19.596839 | orchestrator | Username: zuul-testbed02
2025-05-04 00:00:19.596857 | orchestrator | Distro: Debian 12.10
2025-05-04 00:00:19.596876 | orchestrator | Provider: static-testbed
2025-05-04 00:00:19.596893 | orchestrator | Label: testbed-orchestrator
2025-05-04 00:00:19.596910 | orchestrator | Product Name: OpenStack Nova
2025-05-04 00:00:19.596926 | orchestrator | Interface IP: 81.163.193.140
2025-05-04 00:00:19.619310 |
2025-05-04 00:00:19.619408 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-04 00:00:20.569559 | orchestrator -> localhost | changed
2025-05-04 00:00:20.581584 |
2025-05-04 00:00:20.581690 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-04 00:00:22.676759 | orchestrator -> localhost | changed
2025-05-04 00:00:22.693407 |
2025-05-04 00:00:22.693504 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-04 00:00:23.363540 | orchestrator -> localhost | ok
2025-05-04 00:00:23.372135 |
2025-05-04 00:00:23.372235 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-04 00:00:23.425791 | orchestrator | ok
2025-05-04 00:00:23.463021 | orchestrator | included: /var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-04 00:00:23.477677 |
2025-05-04 00:00:23.477766 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-04 00:00:24.411364 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-04 00:00:24.411539 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/dced2a157bc241aea55b02a5a6515176_id_rsa
2025-05-04 00:00:24.411569 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/dced2a157bc241aea55b02a5a6515176_id_rsa.pub
2025-05-04 00:00:24.411589 | orchestrator -> localhost | The key fingerprint is:
2025-05-04 00:00:24.411608 | orchestrator -> localhost | SHA256:SQ3kQ1rcD2IyUu6xweZvaLVqhP1eetROsb2q8z2WMSU zuul-build-sshkey
2025-05-04 00:00:24.411626 | orchestrator -> localhost | The key's randomart image is:
2025-05-04 00:00:24.411642 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-04 00:00:24.411681 | orchestrator -> localhost | | ..o=. |
2025-05-04 00:00:24.411699 | orchestrator -> localhost | | .oo=+oo |
2025-05-04 00:00:24.411722 | orchestrator -> localhost | | .B++..o |
2025-05-04 00:00:24.411740 | orchestrator -> localhost | | + = o o E .|
2025-05-04 00:00:24.411756 | orchestrator -> localhost | | o+ S . + o |
2025-05-04 00:00:24.411771 | orchestrator -> localhost | | . o+ .. + + |
2025-05-04 00:00:24.411794 | orchestrator -> localhost | | .o.+..o = |
2025-05-04 00:00:24.411810 | orchestrator -> localhost | | ..o.oo ..= |
2025-05-04 00:00:24.411826 | orchestrator -> localhost | | ...o..+oo.. |
2025-05-04 00:00:24.411842 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-04 00:00:24.411884 | orchestrator -> localhost | ok: Runtime: 0:00:00.230131
2025-05-04 00:00:24.419138 |
2025-05-04 00:00:24.419226 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-04 00:00:24.475813 | orchestrator | ok
2025-05-04 00:00:24.500526 | orchestrator | included: /var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-04 00:00:24.533965 |
2025-05-04 00:00:24.534062 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-04 00:00:24.557474 | orchestrator | skipping: Conditional result was False
2025-05-04 00:00:24.564486 |
2025-05-04 00:00:24.564572 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-04 00:00:25.234782 | orchestrator | changed
2025-05-04 00:00:25.247852 |
2025-05-04 00:00:25.247943 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-04 00:00:25.515946 | orchestrator | ok
2025-05-04 00:00:25.522412 |
2025-05-04 00:00:25.522497 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-04 00:00:26.001064 | orchestrator | ok
2025-05-04 00:00:26.151310 |
2025-05-04 00:00:26.152019 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-04 00:00:26.679560 | orchestrator | ok
2025-05-04 00:00:26.690955 |
2025-05-04 00:00:26.691052 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-04 00:00:26.775348 | orchestrator | skipping: Conditional result was False
2025-05-04 00:00:26.782255 |
2025-05-04 00:00:26.782343 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-04 00:00:27.774293 | orchestrator -> localhost | changed
2025-05-04 00:00:27.801093 |
2025-05-04 00:00:27.801195 | TASK [add-build-sshkey : Add back temp key]
2025-05-04 00:00:28.353389 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/dced2a157bc241aea55b02a5a6515176_id_rsa (zuul-build-sshkey)
2025-05-04 00:00:28.353562 | orchestrator -> localhost | ok: Runtime: 0:00:00.018955
2025-05-04 00:00:28.360779 |
2025-05-04 00:00:28.360869 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-04 00:00:28.755405 | orchestrator | ok
2025-05-04 00:00:28.762918 |
2025-05-04 00:00:28.763000 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-04 00:00:28.796401 | orchestrator | skipping: Conditional result was False
2025-05-04 00:00:28.812698 |
2025-05-04 00:00:28.812787 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-04 00:00:29.244569 | orchestrator | ok
2025-05-04 00:00:29.285594 |
2025-05-04 00:00:29.287285 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-04 00:00:29.374212 | orchestrator | ok
2025-05-04 00:00:29.400040 |
2025-05-04 00:00:29.400151 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-04 00:00:29.948072 | orchestrator -> localhost | ok
2025-05-04 00:00:29.956813 |
2025-05-04 00:00:29.956912 | TASK [validate-host : Collect information about the host]
2025-05-04 00:00:31.212030 | orchestrator | ok
2025-05-04 00:00:31.231773 |
2025-05-04 00:00:31.231865 | TASK [validate-host : Sanitize hostname]
2025-05-04 00:00:31.304485 | orchestrator | ok
2025-05-04 00:00:31.314784 |
2025-05-04 00:00:31.314887 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-04 00:00:31.982608 | orchestrator -> localhost | changed
2025-05-04 00:00:31.990464 |
2025-05-04 00:00:31.990569 | TASK [validate-host : Collect information about zuul worker]
2025-05-04 00:00:32.494461 | orchestrator | ok
2025-05-04 00:00:32.499950 |
2025-05-04 00:00:32.500039 | TASK [validate-host : Write out all zuul information for each host]
2025-05-04 00:00:33.301035 | orchestrator -> localhost | changed
2025-05-04 00:00:33.312120 |
2025-05-04 00:00:33.312215 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-04 00:00:33.613359 | orchestrator | ok
2025-05-04 00:00:33.619636 |
2025-05-04 00:00:33.619754 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-04 00:00:51.542147 | orchestrator | changed:
2025-05-04 00:00:51.542370 | orchestrator | .d..t...... src/
2025-05-04 00:00:51.542408 | orchestrator | .d..t...... src/github.com/
2025-05-04 00:00:51.542434 | orchestrator | .d..t...... src/github.com/osism/
2025-05-04 00:00:51.542454 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-04 00:00:51.542474 | orchestrator | RedHat.yml
2025-05-04 00:00:51.557713 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-04 00:00:51.557731 | orchestrator | RedHat.yml
2025-05-04 00:00:51.557783 | orchestrator | = 2.2.0"...
2025-05-04 00:01:04.821354 | orchestrator | 00:01:04.821 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-04 00:01:04.920205 | orchestrator | 00:01:04.919 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-04 00:01:06.227359 | orchestrator | 00:01:06.227 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-04 00:01:07.103000 | orchestrator | 00:01:07.102 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-04 00:01:08.574903 | orchestrator | 00:01:08.574 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-04 00:01:09.807605 | orchestrator | 00:01:09.807 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-04 00:01:11.096817 | orchestrator | 00:01:11.096 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-04 00:01:12.304193 | orchestrator | 00:01:12.303 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-04 00:01:12.304314 | orchestrator | 00:01:12.304 STDOUT terraform: Providers are signed by their developers.
2025-05-04 00:01:12.304340 | orchestrator | 00:01:12.304 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-04 00:01:12.304429 | orchestrator | 00:01:12.304 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-04 00:01:12.304449 | orchestrator | 00:01:12.304 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-04 00:01:12.304470 | orchestrator | 00:01:12.304 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-04 00:01:12.305114 | orchestrator | 00:01:12.304 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-04 00:01:12.305139 | orchestrator | 00:01:12.304 STDOUT terraform: you run "tofu init" in the future.
2025-05-04 00:01:12.305161 | orchestrator | 00:01:12.304 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-04 00:01:12.305179 | orchestrator | 00:01:12.305 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-04 00:01:12.305271 | orchestrator | 00:01:12.305 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-04 00:01:12.305304 | orchestrator | 00:01:12.305 STDOUT terraform: should now work.
2025-05-04 00:01:12.305335 | orchestrator | 00:01:12.305 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-04 00:01:12.305354 | orchestrator | 00:01:12.305 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-04 00:01:12.305418 | orchestrator | 00:01:12.305 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-04 00:01:12.491085 | orchestrator | 00:01:12.490 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-04 00:01:12.685822 | orchestrator | 00:01:12.685 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-04 00:01:12.919738 | orchestrator | 00:01:12.685 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-04 00:01:12.919868 | orchestrator | 00:01:12.685 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-04 00:01:12.919888 | orchestrator | 00:01:12.685 STDOUT terraform: for this configuration.
2025-05-04 00:01:12.919933 | orchestrator | 00:01:12.919 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-04 00:01:13.034961 | orchestrator | 00:01:13.034 STDOUT terraform: ci.auto.tfvars
2025-05-04 00:01:13.292956 | orchestrator | 00:01:13.292 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-04 00:01:14.276904 | orchestrator | 00:01:14.276 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-04 00:01:14.805997 | orchestrator | 00:01:14.805 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-04 00:01:15.052919 | orchestrator | 00:01:15.052 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-04 00:01:15.053008 | orchestrator | 00:01:15.052 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-04 00:01:15.053026 | orchestrator | 00:01:15.052 STDOUT terraform:   + create
2025-05-04 00:01:15.053044 | orchestrator | 00:01:15.052 STDOUT terraform:  <= read (data resources)
2025-05-04 00:01:15.053238 | orchestrator | 00:01:15.052 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-04 00:01:15.053250 | orchestrator | 00:01:15.053 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-04 00:01:15.053297 | orchestrator | 00:01:15.053 STDOUT terraform:   # (config refers to values not yet known)
2025-05-04 00:01:15.053313 | orchestrator | 00:01:15.053 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-04 00:01:15.053340 | orchestrator | 00:01:15.053 STDOUT terraform:   + checksum = (known after apply)
2025-05-04 00:01:15.053379 | orchestrator | 00:01:15.053 STDOUT terraform:   + created_at = (known after apply)
2025-05-04 00:01:15.053419 | orchestrator | 00:01:15.053 STDOUT terraform:   + file = (known after apply)
2025-05-04 00:01:15.053460 | orchestrator | 00:01:15.053 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.053500 | orchestrator | 00:01:15.053 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.053539 | orchestrator | 00:01:15.053 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-04 00:01:15.053580 | orchestrator | 00:01:15.053 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-04 00:01:15.053610 | orchestrator | 00:01:15.053 STDOUT terraform:   + most_recent = true
2025-05-04 00:01:15.053657 | orchestrator | 00:01:15.053 STDOUT terraform:   + name = (known after apply)
2025-05-04 00:01:15.053681 | orchestrator | 00:01:15.053 STDOUT terraform:   + protected = (known after apply)
2025-05-04 00:01:15.053729 | orchestrator | 00:01:15.053 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.053757 | orchestrator | 00:01:15.053 STDOUT terraform:   + schema = (known after apply)
2025-05-04 00:01:15.053795 | orchestrator | 00:01:15.053 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-04 00:01:15.053837 | orchestrator | 00:01:15.053 STDOUT terraform:   + tags = (known after apply)
2025-05-04 00:01:15.053886 | orchestrator | 00:01:15.053 STDOUT terraform:   + updated_at = (known after apply)
2025-05-04 00:01:15.053915 | orchestrator | 00:01:15.053 STDOUT terraform:   }
2025-05-04 00:01:15.054155 | orchestrator | 00:01:15.054 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-04 00:01:15.054183 | orchestrator | 00:01:15.054 STDOUT terraform:   # (config refers to values not yet known)
2025-05-04 00:01:15.054228 | orchestrator | 00:01:15.054 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-04 00:01:15.054268 | orchestrator | 00:01:15.054 STDOUT terraform:   + checksum = (known after apply)
2025-05-04 00:01:15.054301 | orchestrator | 00:01:15.054 STDOUT terraform:   + created_at = (known after apply)
2025-05-04 00:01:15.054341 | orchestrator | 00:01:15.054 STDOUT terraform:   + file = (known after apply)
2025-05-04 00:01:15.054378 | orchestrator | 00:01:15.054 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.054415 | orchestrator | 00:01:15.054 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.054451 | orchestrator | 00:01:15.054 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-04 00:01:15.054488 | orchestrator | 00:01:15.054 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-04 00:01:15.054514 | orchestrator | 00:01:15.054 STDOUT terraform:   + most_recent = true
2025-05-04 00:01:15.054560 | orchestrator | 00:01:15.054 STDOUT terraform:   + name = (known after apply)
2025-05-04 00:01:15.054588 | orchestrator | 00:01:15.054 STDOUT terraform:   + protected = (known after apply)
2025-05-04 00:01:15.054625 | orchestrator | 00:01:15.054 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.054662 | orchestrator | 00:01:15.054 STDOUT terraform:   + schema = (known after apply)
2025-05-04 00:01:15.054699 | orchestrator | 00:01:15.054 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-04 00:01:15.054735 | orchestrator | 00:01:15.054 STDOUT terraform:   + tags = (known after apply)
2025-05-04 00:01:15.054772 | orchestrator | 00:01:15.054 STDOUT terraform:   + updated_at = (known after apply)
2025-05-04 00:01:15.054791 | orchestrator | 00:01:15.054 STDOUT terraform:   }
2025-05-04 00:01:15.054839 | orchestrator | 00:01:15.054 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-04 00:01:15.054875 | orchestrator | 00:01:15.054 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-04 00:01:15.054921 | orchestrator | 00:01:15.054 STDOUT terraform:   + content = (known after apply)
2025-05-04 00:01:15.054974 | orchestrator | 00:01:15.054 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-04 00:01:15.055100 | orchestrator | 00:01:15.054 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-04 00:01:15.055144 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-04 00:01:15.055154 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-04 00:01:15.055162 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-04 00:01:15.055197 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-04 00:01:15.055220 | orchestrator | 00:01:15.055 STDOUT terraform:   + directory_permission = "0777"
2025-05-04 00:01:15.055253 | orchestrator | 00:01:15.055 STDOUT terraform:   + file_permission = "0644"
2025-05-04 00:01:15.055300 | orchestrator | 00:01:15.055 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-04 00:01:15.055349 | orchestrator | 00:01:15.055 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.055366 | orchestrator | 00:01:15.055 STDOUT terraform:   }
2025-05-04 00:01:15.055397 | orchestrator | 00:01:15.055 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-04 00:01:15.055428 | orchestrator | 00:01:15.055 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-04 00:01:15.055474 | orchestrator | 00:01:15.055 STDOUT terraform:   + content = (known after apply)
2025-05-04 00:01:15.055528 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-04 00:01:15.055564 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-04 00:01:15.055616 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-04 00:01:15.055654 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-04 00:01:15.055706 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-04 00:01:15.055741 | orchestrator | 00:01:15.055 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-04 00:01:15.055772 | orchestrator | 00:01:15.055 STDOUT terraform:   + directory_permission = "0777"
2025-05-04 00:01:15.055804 | orchestrator | 00:01:15.055 STDOUT terraform:   + file_permission = "0644"
2025-05-04 00:01:15.055843 | orchestrator | 00:01:15.055 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-04 00:01:15.055887 | orchestrator | 00:01:15.055 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.055904 | orchestrator | 00:01:15.055 STDOUT terraform:   }
2025-05-04 00:01:15.055935 | orchestrator | 00:01:15.055 STDOUT terraform:   # local_file.inventory will be created
2025-05-04 00:01:15.055966 | orchestrator | 00:01:15.055 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-04 00:01:15.056030 | orchestrator | 00:01:15.055 STDOUT terraform:   + content = (known after apply)
2025-05-04 00:01:15.056066 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-04 00:01:15.056117 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-04 00:01:15.056155 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-04 00:01:15.056207 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-04 00:01:15.056242 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-04 00:01:15.056294 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-04 00:01:15.056317 | orchestrator | 00:01:15.056 STDOUT terraform:   + directory_permission = "0777"
2025-05-04 00:01:15.056347 | orchestrator | 00:01:15.056 STDOUT terraform:   + file_permission = "0644"
2025-05-04 00:01:15.056394 | orchestrator | 00:01:15.056 STDOUT terraform:   + filename = "inventory.ci"
2025-05-04 00:01:15.056432 | orchestrator | 00:01:15.056 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.056460 | orchestrator | 00:01:15.056 STDOUT terraform:   }
2025-05-04 00:01:15.056487 | orchestrator | 00:01:15.056 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-04 00:01:15.056523 | orchestrator | 00:01:15.056 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-04 00:01:15.056564 | orchestrator | 00:01:15.056 STDOUT terraform:   + content = (sensitive value)
2025-05-04 00:01:15.056607 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-04 00:01:15.056653 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-04 00:01:15.056704 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-04 00:01:15.056744 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-04 00:01:15.056792 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-04 00:01:15.056829 | orchestrator | 00:01:15.056 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-04 00:01:15.056860 | orchestrator | 00:01:15.056 STDOUT terraform:   + directory_permission = "0700"
2025-05-04 00:01:15.056890 | orchestrator | 00:01:15.056 STDOUT terraform:   + file_permission = "0600"
2025-05-04 00:01:15.056929 | orchestrator | 00:01:15.056 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-04 00:01:15.056975 | orchestrator | 00:01:15.056 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.056993 | orchestrator | 00:01:15.056 STDOUT terraform:   }
2025-05-04 00:01:15.057057 | orchestrator | 00:01:15.056 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-04 00:01:15.057079 | orchestrator | 00:01:15.057 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-04 00:01:15.057120 | orchestrator | 00:01:15.057 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.057179 | orchestrator | 00:01:15.057 STDOUT terraform:   }
2025-05-04 00:01:15.057188 | orchestrator | 00:01:15.057 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-04 00:01:15.057238 | orchestrator | 00:01:15.057 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-04 00:01:15.057277 | orchestrator | 00:01:15.057 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.057313 | orchestrator | 00:01:15.057 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.057344 | orchestrator | 00:01:15.057 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.057394 | orchestrator | 00:01:15.057 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.057421 | orchestrator | 00:01:15.057 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.057476 | orchestrator | 00:01:15.057 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-04 00:01:15.057509 | orchestrator | 00:01:15.057 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.057544 | orchestrator | 00:01:15.057 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.057563 | orchestrator | 00:01:15.057 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.057580 | orchestrator | 00:01:15.057 STDOUT terraform:   }
2025-05-04 00:01:15.057668 | orchestrator | 00:01:15.057 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-04 00:01:15.057725 | orchestrator | 00:01:15.057 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.057764 | orchestrator | 00:01:15.057 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.057796 | orchestrator | 00:01:15.057 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.057832 | orchestrator | 00:01:15.057 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.057876 | orchestrator | 00:01:15.057 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.057910 | orchestrator | 00:01:15.057 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.057960 | orchestrator | 00:01:15.057 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-04 00:01:15.057998 | orchestrator | 00:01:15.057 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.058068 | orchestrator | 00:01:15.057 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.058095 | orchestrator | 00:01:15.058 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.058121 | orchestrator | 00:01:15.058 STDOUT terraform:   }
2025-05-04 00:01:15.058172 | orchestrator | 00:01:15.058 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-04 00:01:15.058229 | orchestrator | 00:01:15.058 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.058270 | orchestrator | 00:01:15.058 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.058298 | orchestrator | 00:01:15.058 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.058337 | orchestrator | 00:01:15.058 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.058384 | orchestrator | 00:01:15.058 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.058414 | orchestrator | 00:01:15.058 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.058463 | orchestrator | 00:01:15.058 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-04 00:01:15.058505 | orchestrator | 00:01:15.058 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.058542 | orchestrator | 00:01:15.058 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.058550 | orchestrator | 00:01:15.058 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.058568 | orchestrator | 00:01:15.058 STDOUT terraform:   }
2025-05-04 00:01:15.058622 | orchestrator | 00:01:15.058 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-04 00:01:15.058673 | orchestrator | 00:01:15.058 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.058709 | orchestrator | 00:01:15.058 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.058732 | orchestrator | 00:01:15.058 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.058768 | orchestrator | 00:01:15.058 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.058802 | orchestrator | 00:01:15.058 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.058836 | orchestrator | 00:01:15.058 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.058881 | orchestrator | 00:01:15.058 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-04 00:01:15.058917 | orchestrator | 00:01:15.058 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.058940 | orchestrator | 00:01:15.058 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.058965 | orchestrator | 00:01:15.058 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.058973 | orchestrator | 00:01:15.058 STDOUT terraform:   }
2025-05-04 00:01:15.059041 | orchestrator | 00:01:15.058 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-04 00:01:15.059094 | orchestrator | 00:01:15.059 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.059135 | orchestrator | 00:01:15.059 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.059154 | orchestrator | 00:01:15.059 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.059187 | orchestrator | 00:01:15.059 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.059222 | orchestrator | 00:01:15.059 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.059257 | orchestrator | 00:01:15.059 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.059301 | orchestrator | 00:01:15.059 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-04 00:01:15.059335 | orchestrator | 00:01:15.059 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.059358 | orchestrator | 00:01:15.059 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.059382 | orchestrator | 00:01:15.059 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.059390 | orchestrator | 00:01:15.059 STDOUT terraform:   }
2025-05-04 00:01:15.059446 | orchestrator | 00:01:15.059 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-04 00:01:15.059499 | orchestrator | 00:01:15.059 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.059533 | orchestrator | 00:01:15.059 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.059557 | orchestrator | 00:01:15.059 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.059592 | orchestrator | 00:01:15.059 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.059629 | orchestrator | 00:01:15.059 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.059663 | orchestrator | 00:01:15.059 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.059707 | orchestrator | 00:01:15.059 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-04 00:01:15.059742 | orchestrator | 00:01:15.059 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.059764 | orchestrator | 00:01:15.059 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.059789 | orchestrator | 00:01:15.059 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.059796 | orchestrator | 00:01:15.059 STDOUT terraform:   }
2025-05-04 00:01:15.059855 | orchestrator | 00:01:15.059 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-04 00:01:15.059905 | orchestrator | 00:01:15.059 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-04 00:01:15.059942 | orchestrator | 00:01:15.059 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.059966 | orchestrator | 00:01:15.059 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.060003 | orchestrator | 00:01:15.059 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.060048 | orchestrator | 00:01:15.059 STDOUT terraform:   + image_id = (known after apply)
2025-05-04 00:01:15.060082 | orchestrator | 00:01:15.060 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.060145 | orchestrator | 00:01:15.060 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-04 00:01:15.060155 | orchestrator | 00:01:15.060 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.060186 | orchestrator | 00:01:15.060 STDOUT terraform:   + size = 80
2025-05-04 00:01:15.060204 | orchestrator | 00:01:15.060 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.060211 | orchestrator | 00:01:15.060 STDOUT terraform:   }
2025-05-04 00:01:15.060264 | orchestrator | 00:01:15.060 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-04 00:01:15.060314 | orchestrator | 00:01:15.060 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-04 00:01:15.060350 | orchestrator | 00:01:15.060 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.060375 | orchestrator | 00:01:15.060 STDOUT terraform:   + availability_zone = "nova"
2025-05-04 00:01:15.060414 | orchestrator | 00:01:15.060 STDOUT terraform:   + id = (known after apply)
2025-05-04 00:01:15.060446 | orchestrator | 00:01:15.060 STDOUT terraform:   + metadata = (known after apply)
2025-05-04 00:01:15.060490 | orchestrator | 00:01:15.060 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-05-04 00:01:15.060524 | orchestrator | 00:01:15.060 STDOUT terraform:   + region = (known after apply)
2025-05-04 00:01:15.060549 | orchestrator | 00:01:15.060 STDOUT terraform:   + size = 20
2025-05-04 00:01:15.060571 | orchestrator | 00:01:15.060 STDOUT terraform:   + volume_type = "ssd"
2025-05-04 00:01:15.060586 | orchestrator | 00:01:15.060 STDOUT terraform:   }
2025-05-04 00:01:15.060636 | orchestrator | 00:01:15.060 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-04 00:01:15.060687 | orchestrator | 00:01:15.060 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-04 00:01:15.060721 | orchestrator | 00:01:15.060 STDOUT terraform:   + attachment = (known after apply)
2025-05-04 00:01:15.060744 | orchestrator | 00:01:15.060 STDOUT terraform:
+ availability_zone = "nova" 2025-05-04 00:01:15.060780 | orchestrator | 00:01:15.060 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.060815 | orchestrator | 00:01:15.060 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.060858 | orchestrator | 00:01:15.060 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-05-04 00:01:15.060893 | orchestrator | 00:01:15.060 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.060917 | orchestrator | 00:01:15.060 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.060941 | orchestrator | 00:01:15.060 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.060948 | orchestrator | 00:01:15.060 STDOUT terraform:  } 2025-05-04 00:01:15.061003 | orchestrator | 00:01:15.060 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-04 00:01:15.061078 | orchestrator | 00:01:15.060 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.061112 | orchestrator | 00:01:15.061 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.061137 | orchestrator | 00:01:15.061 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.061185 | orchestrator | 00:01:15.061 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.061211 | orchestrator | 00:01:15.061 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.061253 | orchestrator | 00:01:15.061 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-05-04 00:01:15.061287 | orchestrator | 00:01:15.061 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.061310 | orchestrator | 00:01:15.061 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.061331 | orchestrator | 00:01:15.061 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.061347 | orchestrator | 00:01:15.061 STDOUT terraform:  } 2025-05-04 00:01:15.061393 | orchestrator | 00:01:15.061 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-04 00:01:15.061437 | orchestrator | 00:01:15.061 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.061469 | orchestrator | 00:01:15.061 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.061492 | orchestrator | 00:01:15.061 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.061524 | orchestrator | 00:01:15.061 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.061556 | orchestrator | 00:01:15.061 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.061595 | orchestrator | 00:01:15.061 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-04 00:01:15.061627 | orchestrator | 00:01:15.061 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.061650 | orchestrator | 00:01:15.061 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.061674 | orchestrator | 00:01:15.061 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.061682 | orchestrator | 00:01:15.061 STDOUT terraform:  } 2025-05-04 00:01:15.061731 | orchestrator | 00:01:15.061 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-04 00:01:15.061791 | orchestrator | 00:01:15.061 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.061830 | orchestrator | 00:01:15.061 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.061853 | orchestrator | 00:01:15.061 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.061887 | orchestrator | 00:01:15.061 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.061918 | orchestrator | 00:01:15.061 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.061957 | orchestrator | 00:01:15.061 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-04 00:01:15.061991 | orchestrator | 00:01:15.061 STDOUT 
terraform:  + region = (known after apply) 2025-05-04 00:01:15.062047 | orchestrator | 00:01:15.061 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.062076 | orchestrator | 00:01:15.062 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.062094 | orchestrator | 00:01:15.062 STDOUT terraform:  } 2025-05-04 00:01:15.062143 | orchestrator | 00:01:15.062 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-04 00:01:15.062189 | orchestrator | 00:01:15.062 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.062222 | orchestrator | 00:01:15.062 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.062244 | orchestrator | 00:01:15.062 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.062278 | orchestrator | 00:01:15.062 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.062313 | orchestrator | 00:01:15.062 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.062350 | orchestrator | 00:01:15.062 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-04 00:01:15.062383 | orchestrator | 00:01:15.062 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.062407 | orchestrator | 00:01:15.062 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.062429 | orchestrator | 00:01:15.062 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.062436 | orchestrator | 00:01:15.062 STDOUT terraform:  } 2025-05-04 00:01:15.062484 | orchestrator | 00:01:15.062 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-04 00:01:15.062529 | orchestrator | 00:01:15.062 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.062561 | orchestrator | 00:01:15.062 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.062582 | orchestrator | 00:01:15.062 STDOUT terraform:  + availability_zone = "nova" 
2025-05-04 00:01:15.062614 | orchestrator | 00:01:15.062 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.062646 | orchestrator | 00:01:15.062 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.062686 | orchestrator | 00:01:15.062 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-05-04 00:01:15.062721 | orchestrator | 00:01:15.062 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.062743 | orchestrator | 00:01:15.062 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.062776 | orchestrator | 00:01:15.062 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.062820 | orchestrator | 00:01:15.062 STDOUT terraform:  } 2025-05-04 00:01:15.062828 | orchestrator | 00:01:15.062 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-04 00:01:15.062864 | orchestrator | 00:01:15.062 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.062898 | orchestrator | 00:01:15.062 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.062918 | orchestrator | 00:01:15.062 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.062951 | orchestrator | 00:01:15.062 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.062983 | orchestrator | 00:01:15.062 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.063052 | orchestrator | 00:01:15.062 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-05-04 00:01:15.063086 | orchestrator | 00:01:15.063 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.063108 | orchestrator | 00:01:15.063 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.063129 | orchestrator | 00:01:15.063 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.063137 | orchestrator | 00:01:15.063 STDOUT terraform:  } 2025-05-04 00:01:15.063185 | orchestrator | 00:01:15.063 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-04 00:01:15.063229 | orchestrator | 00:01:15.063 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.063262 | orchestrator | 00:01:15.063 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.063283 | orchestrator | 00:01:15.063 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.063315 | orchestrator | 00:01:15.063 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.063347 | orchestrator | 00:01:15.063 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.063386 | orchestrator | 00:01:15.063 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-05-04 00:01:15.063418 | orchestrator | 00:01:15.063 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.063440 | orchestrator | 00:01:15.063 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.063462 | orchestrator | 00:01:15.063 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.063470 | orchestrator | 00:01:15.063 STDOUT terraform:  } 2025-05-04 00:01:15.063519 | orchestrator | 00:01:15.063 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-05-04 00:01:15.063564 | orchestrator | 00:01:15.063 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.063596 | orchestrator | 00:01:15.063 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.063618 | orchestrator | 00:01:15.063 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.063650 | orchestrator | 00:01:15.063 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.063680 | orchestrator | 00:01:15.063 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.063719 | orchestrator | 00:01:15.063 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-05-04 00:01:15.063750 | orchestrator | 00:01:15.063 STDOUT 
terraform:  + region = (known after apply) 2025-05-04 00:01:15.063772 | orchestrator | 00:01:15.063 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.063795 | orchestrator | 00:01:15.063 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.063802 | orchestrator | 00:01:15.063 STDOUT terraform:  } 2025-05-04 00:01:15.063848 | orchestrator | 00:01:15.063 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-05-04 00:01:15.063890 | orchestrator | 00:01:15.063 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.063922 | orchestrator | 00:01:15.063 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.063942 | orchestrator | 00:01:15.063 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.063973 | orchestrator | 00:01:15.063 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.064005 | orchestrator | 00:01:15.063 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.064055 | orchestrator | 00:01:15.064 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-05-04 00:01:15.064086 | orchestrator | 00:01:15.064 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.064107 | orchestrator | 00:01:15.064 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.064129 | orchestrator | 00:01:15.064 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.064136 | orchestrator | 00:01:15.064 STDOUT terraform:  } 2025-05-04 00:01:15.064183 | orchestrator | 00:01:15.064 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-05-04 00:01:15.064226 | orchestrator | 00:01:15.064 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.064255 | orchestrator | 00:01:15.064 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.064276 | orchestrator | 00:01:15.064 STDOUT terraform:  + availability_zone = "nova" 
2025-05-04 00:01:15.064321 | orchestrator | 00:01:15.064 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.064328 | orchestrator | 00:01:15.064 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.064367 | orchestrator | 00:01:15.064 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-05-04 00:01:15.064399 | orchestrator | 00:01:15.064 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.064418 | orchestrator | 00:01:15.064 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.064439 | orchestrator | 00:01:15.064 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.064446 | orchestrator | 00:01:15.064 STDOUT terraform:  } 2025-05-04 00:01:15.064492 | orchestrator | 00:01:15.064 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-05-04 00:01:15.064534 | orchestrator | 00:01:15.064 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.064566 | orchestrator | 00:01:15.064 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.064587 | orchestrator | 00:01:15.064 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.064618 | orchestrator | 00:01:15.064 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.064649 | orchestrator | 00:01:15.064 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.064686 | orchestrator | 00:01:15.064 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-05-04 00:01:15.064716 | orchestrator | 00:01:15.064 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.064737 | orchestrator | 00:01:15.064 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.064758 | orchestrator | 00:01:15.064 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.064765 | orchestrator | 00:01:15.064 STDOUT terraform:  } 2025-05-04 00:01:15.064811 | orchestrator | 00:01:15.064 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-05-04 00:01:15.064854 | orchestrator | 00:01:15.064 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.064885 | orchestrator | 00:01:15.064 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.064906 | orchestrator | 00:01:15.064 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.064938 | orchestrator | 00:01:15.064 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.064969 | orchestrator | 00:01:15.064 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.065021 | orchestrator | 00:01:15.064 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-05-04 00:01:15.065050 | orchestrator | 00:01:15.065 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.065072 | orchestrator | 00:01:15.065 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.065093 | orchestrator | 00:01:15.065 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.065100 | orchestrator | 00:01:15.065 STDOUT terraform:  } 2025-05-04 00:01:15.065148 | orchestrator | 00:01:15.065 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-05-04 00:01:15.065191 | orchestrator | 00:01:15.065 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.065230 | orchestrator | 00:01:15.065 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.065248 | orchestrator | 00:01:15.065 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.065295 | orchestrator | 00:01:15.065 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.065314 | orchestrator | 00:01:15.065 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.065351 | orchestrator | 00:01:15.065 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-05-04 00:01:15.065382 | orchestrator | 00:01:15.065 STDOUT 
terraform:  + region = (known after apply) 2025-05-04 00:01:15.065404 | orchestrator | 00:01:15.065 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.065428 | orchestrator | 00:01:15.065 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.065435 | orchestrator | 00:01:15.065 STDOUT terraform:  } 2025-05-04 00:01:15.065484 | orchestrator | 00:01:15.065 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-05-04 00:01:15.065526 | orchestrator | 00:01:15.065 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.065557 | orchestrator | 00:01:15.065 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.065578 | orchestrator | 00:01:15.065 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.065610 | orchestrator | 00:01:15.065 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.065644 | orchestrator | 00:01:15.065 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.065679 | orchestrator | 00:01:15.065 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-05-04 00:01:15.065713 | orchestrator | 00:01:15.065 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.065731 | orchestrator | 00:01:15.065 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.065751 | orchestrator | 00:01:15.065 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.065759 | orchestrator | 00:01:15.065 STDOUT terraform:  } 2025-05-04 00:01:15.065806 | orchestrator | 00:01:15.065 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-05-04 00:01:15.065850 | orchestrator | 00:01:15.065 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.065879 | orchestrator | 00:01:15.065 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.065902 | orchestrator | 00:01:15.065 STDOUT terraform:  + availability_zone = "nova" 
2025-05-04 00:01:15.065934 | orchestrator | 00:01:15.065 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.065964 | orchestrator | 00:01:15.065 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.066003 | orchestrator | 00:01:15.065 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-05-04 00:01:15.066078 | orchestrator | 00:01:15.065 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.066098 | orchestrator | 00:01:15.066 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.066106 | orchestrator | 00:01:15.066 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.069110 | orchestrator | 00:01:15.066 STDOUT terraform:  } 2025-05-04 00:01:15.069148 | orchestrator | 00:01:15.066 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-05-04 00:01:15.069164 | orchestrator | 00:01:15.066 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-04 00:01:15.069170 | orchestrator | 00:01:15.066 STDOUT terraform:  + attachment = (known after apply) 2025-05-04 00:01:15.069175 | orchestrator | 00:01:15.066 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.069180 | orchestrator | 00:01:15.066 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.069186 | orchestrator | 00:01:15.066 STDOUT terraform:  + metadata = (known after apply) 2025-05-04 00:01:15.069191 | orchestrator | 00:01:15.066 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-05-04 00:01:15.069196 | orchestrator | 00:01:15.066 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.069201 | orchestrator | 00:01:15.066 STDOUT terraform:  + size = 20 2025-05-04 00:01:15.069206 | orchestrator | 00:01:15.066 STDOUT terraform:  + volume_type = "ssd" 2025-05-04 00:01:15.069211 | orchestrator | 00:01:15.066 STDOUT terraform:  } 2025-05-04 00:01:15.069220 | orchestrator | 00:01:15.066 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-05-04 00:01:15.069225 | orchestrator | 00:01:15.066 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-04 00:01:15.069230 | orchestrator | 00:01:15.066 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-04 00:01:15.069235 | orchestrator | 00:01:15.066 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-04 00:01:15.069240 | orchestrator | 00:01:15.066 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-04 00:01:15.069245 | orchestrator | 00:01:15.066 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.069250 | orchestrator | 00:01:15.066 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.069255 | orchestrator | 00:01:15.066 STDOUT terraform:  + config_drive = true 2025-05-04 00:01:15.069260 | orchestrator | 00:01:15.066 STDOUT terraform:  + created = (known after apply) 2025-05-04 00:01:15.069265 | orchestrator | 00:01:15.066 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-04 00:01:15.069271 | orchestrator | 00:01:15.066 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-04 00:01:15.069276 | orchestrator | 00:01:15.066 STDOUT terraform:  + force_delete = false 2025-05-04 00:01:15.069281 | orchestrator | 00:01:15.066 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.069286 | orchestrator | 00:01:15.066 STDOUT terraform:  + image_id = (known after apply) 2025-05-04 00:01:15.069291 | orchestrator | 00:01:15.066 STDOUT terraform:  + image_name = (known after apply) 2025-05-04 00:01:15.069296 | orchestrator | 00:01:15.066 STDOUT terraform:  + key_pair = "testbed" 2025-05-04 00:01:15.069302 | orchestrator | 00:01:15.066 STDOUT terraform:  + name = "testbed-manager" 2025-05-04 00:01:15.069307 | orchestrator | 00:01:15.066 STDOUT terraform:  + power_state = "active" 2025-05-04 00:01:15.069311 | orchestrator | 00:01:15.066 STDOUT terraform:  + region = (known after 
apply) 2025-05-04 00:01:15.069316 | orchestrator | 00:01:15.066 STDOUT terraform:  + security_groups = (known after apply) 2025-05-04 00:01:15.069324 | orchestrator | 00:01:15.067 STDOUT terraform:  + stop_before_destroy = false 2025-05-04 00:01:15.069330 | orchestrator | 00:01:15.067 STDOUT terraform:  + updated = (known after apply) 2025-05-04 00:01:15.069336 | orchestrator | 00:01:15.067 STDOUT terraform:  + user_data = (known after apply) 2025-05-04 00:01:15.069341 | orchestrator | 00:01:15.067 STDOUT terraform:  + block_device { 2025-05-04 00:01:15.069346 | orchestrator | 00:01:15.067 STDOUT terraform:  + boot_index = 0 2025-05-04 00:01:15.069351 | orchestrator | 00:01:15.067 STDOUT terraform:  + delete_on_termination = false 2025-05-04 00:01:15.069363 | orchestrator | 00:01:15.067 STDOUT terraform:  + destination_type = "volume" 2025-05-04 00:01:15.069369 | orchestrator | 00:01:15.067 STDOUT terraform:  + multiattach = false 2025-05-04 00:01:15.069374 | orchestrator | 00:01:15.067 STDOUT terraform:  + source_type = "volume" 2025-05-04 00:01:15.069379 | orchestrator | 00:01:15.067 STDOUT terraform:  + uuid = (known after apply) 2025-05-04 00:01:15.069384 | orchestrator | 00:01:15.067 STDOUT terraform:  } 2025-05-04 00:01:15.069389 | orchestrator | 00:01:15.067 STDOUT terraform:  + network { 2025-05-04 00:01:15.069394 | orchestrator | 00:01:15.067 STDOUT terraform:  + access_network = false 2025-05-04 00:01:15.069399 | orchestrator | 00:01:15.067 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-04 00:01:15.069405 | orchestrator | 00:01:15.067 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-04 00:01:15.069410 | orchestrator | 00:01:15.067 STDOUT terraform:  + mac = (known after apply) 2025-05-04 00:01:15.069415 | orchestrator | 00:01:15.067 STDOUT terraform:  + name = (known after apply) 2025-05-04 00:01:15.069420 | orchestrator | 00:01:15.067 STDOUT terraform:  + port = (known after apply) 2025-05-04 00:01:15.069425 | orchestrator | 
00:01:15.067 STDOUT terraform:  + uuid = (known after apply) 2025-05-04 00:01:15.069430 | orchestrator | 00:01:15.067 STDOUT terraform:  } 2025-05-04 00:01:15.069435 | orchestrator | 00:01:15.067 STDOUT terraform:  } 2025-05-04 00:01:15.069526 | orchestrator | 00:01:15.067 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-04 00:01:15.069533 | orchestrator | 00:01:15.067 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-04 00:01:15.069538 | orchestrator | 00:01:15.067 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-04 00:01:15.069543 | orchestrator | 00:01:15.067 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-04 00:01:15.069556 | orchestrator | 00:01:15.067 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-04 00:01:15.069561 | orchestrator | 00:01:15.067 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.069566 | orchestrator | 00:01:15.067 STDOUT terraform:  + availability_zone = "nova" 2025-05-04 00:01:15.069571 | orchestrator | 00:01:15.067 STDOUT terraform:  + config_drive = true 2025-05-04 00:01:15.069576 | orchestrator | 00:01:15.067 STDOUT terraform:  + created = (known after apply) 2025-05-04 00:01:15.069585 | orchestrator | 00:01:15.067 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-04 00:01:15.069590 | orchestrator | 00:01:15.067 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-04 00:01:15.069595 | orchestrator | 00:01:15.067 STDOUT terraform:  + force_delete = false 2025-05-04 00:01:15.069600 | orchestrator | 00:01:15.067 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.069605 | orchestrator | 00:01:15.067 STDOUT terraform:  + image_id = (known after apply) 2025-05-04 00:01:15.069610 | orchestrator | 00:01:15.067 STDOUT terraform:  + image_name = (known after apply) 2025-05-04 00:01:15.069615 | orchestrator | 00:01:15.067 STDOUT terraform:  + key_pair = "testbed" 2025-05-04 
2025-05-04 00:01:15 | orchestrator | STDOUT terraform plan (reformatted excerpt; repeated per-fragment timestamps collapsed):

      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device { ... }   # identical to node_server[0] above
      + network { ... }        # identical to node_server[0] above
    }

  # openstack_compute_instance_v2.node_server[2] through node_server[5] will be created
  #   (identical to node_server[1] except name = "testbed-node-2" .. "testbed-node-5")

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [15] will be created
  #   (identical to node_volume_attachment[0])
orchestrator | 00:01:15.078 STDOUT terraform:  } 2025-05-04 00:01:15.078937 | orchestrator | 00:01:15.078 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-05-04 00:01:15.078985 | orchestrator | 00:01:15.078 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-04 00:01:15.079028 | orchestrator | 00:01:15.078 STDOUT terraform:  + device = (known after apply) 2025-05-04 00:01:15.079054 | orchestrator | 00:01:15.079 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.079082 | orchestrator | 00:01:15.079 STDOUT terraform:  + instance_id = (known after apply) 2025-05-04 00:01:15.079111 | orchestrator | 00:01:15.079 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.079152 | orchestrator | 00:01:15.079 STDOUT terraform:  + volume_id = (known after apply) 2025-05-04 00:01:15.079197 | orchestrator | 00:01:15.079 STDOUT terraform:  } 2025-05-04 00:01:15.079205 | orchestrator | 00:01:15.079 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-05-04 00:01:15.079246 | orchestrator | 00:01:15.079 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-04 00:01:15.079276 | orchestrator | 00:01:15.079 STDOUT terraform:  + device = (known after apply) 2025-05-04 00:01:15.079304 | orchestrator | 00:01:15.079 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.079333 | orchestrator | 00:01:15.079 STDOUT terraform:  + instance_id = (known after apply) 2025-05-04 00:01:15.079362 | orchestrator | 00:01:15.079 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.079389 | orchestrator | 00:01:15.079 STDOUT terraform:  + volume_id = (known after apply) 2025-05-04 00:01:15.079396 | orchestrator | 00:01:15.079 STDOUT terraform:  } 2025-05-04 00:01:15.079455 | orchestrator | 00:01:15.079 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-04 00:01:15.079511 | orchestrator | 00:01:15.079 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-04 00:01:15.079540 | orchestrator | 00:01:15.079 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-04 00:01:15.079569 | orchestrator | 00:01:15.079 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-04 00:01:15.079597 | orchestrator | 00:01:15.079 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.079626 | orchestrator | 00:01:15.079 STDOUT terraform:  + port_id = (known after apply) 2025-05-04 00:01:15.079656 | orchestrator | 00:01:15.079 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.079664 | orchestrator | 00:01:15.079 STDOUT terraform:  } 2025-05-04 00:01:15.079708 | orchestrator | 00:01:15.079 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-04 00:01:15.079757 | orchestrator | 00:01:15.079 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-04 00:01:15.079781 | orchestrator | 00:01:15.079 STDOUT terraform:  + address = (known after apply) 2025-05-04 00:01:15.079806 | orchestrator | 00:01:15.079 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.079832 | orchestrator | 00:01:15.079 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-04 00:01:15.079856 | orchestrator | 00:01:15.079 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.079882 | orchestrator | 00:01:15.079 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-04 00:01:15.079907 | orchestrator | 00:01:15.079 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.079928 | orchestrator | 00:01:15.079 STDOUT terraform:  + pool = "public" 2025-05-04 00:01:15.079953 | orchestrator | 00:01:15.079 STDOUT terraform:  + 
port_id = (known after apply) 2025-05-04 00:01:15.079979 | orchestrator | 00:01:15.079 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.080003 | orchestrator | 00:01:15.079 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.080040 | orchestrator | 00:01:15.080 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.080048 | orchestrator | 00:01:15.080 STDOUT terraform:  } 2025-05-04 00:01:15.080093 | orchestrator | 00:01:15.080 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-04 00:01:15.080137 | orchestrator | 00:01:15.080 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-04 00:01:15.080174 | orchestrator | 00:01:15.080 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.080211 | orchestrator | 00:01:15.080 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.080233 | orchestrator | 00:01:15.080 STDOUT terraform:  + availability_zone_hints = [ 2025-05-04 00:01:15.080249 | orchestrator | 00:01:15.080 STDOUT terraform:  + "nova", 2025-05-04 00:01:15.080256 | orchestrator | 00:01:15.080 STDOUT terraform:  ] 2025-05-04 00:01:15.080294 | orchestrator | 00:01:15.080 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-04 00:01:15.080331 | orchestrator | 00:01:15.080 STDOUT terraform:  + external = (known after apply) 2025-05-04 00:01:15.080368 | orchestrator | 00:01:15.080 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.080406 | orchestrator | 00:01:15.080 STDOUT terraform:  + mtu = (known after apply) 2025-05-04 00:01:15.080444 | orchestrator | 00:01:15.080 STDOUT terraform:  + name = "net-testbed-management" 2025-05-04 00:01:15.080479 | orchestrator | 00:01:15.080 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.080516 | orchestrator | 00:01:15.080 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 
00:01:15.080552 | orchestrator | 00:01:15.080 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.080592 | orchestrator | 00:01:15.080 STDOUT terraform:  + shared = (known after apply) 2025-05-04 00:01:15.080628 | orchestrator | 00:01:15.080 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.080664 | orchestrator | 00:01:15.080 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-04 00:01:15.080687 | orchestrator | 00:01:15.080 STDOUT terraform:  + segments (known after apply) 2025-05-04 00:01:15.080696 | orchestrator | 00:01:15.080 STDOUT terraform:  } 2025-05-04 00:01:15.080744 | orchestrator | 00:01:15.080 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-04 00:01:15.080790 | orchestrator | 00:01:15.080 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-04 00:01:15.080824 | orchestrator | 00:01:15.080 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.080861 | orchestrator | 00:01:15.080 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.080895 | orchestrator | 00:01:15.080 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.080932 | orchestrator | 00:01:15.080 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.080968 | orchestrator | 00:01:15.080 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.081005 | orchestrator | 00:01:15.080 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.081064 | orchestrator | 00:01:15.081 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.081101 | orchestrator | 00:01:15.081 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.081137 | orchestrator | 00:01:15.081 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.081174 | orchestrator | 00:01:15.081 STDOUT terraform:  + 
mac_address = (known after apply) 2025-05-04 00:01:15.081214 | orchestrator | 00:01:15.081 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.081244 | orchestrator | 00:01:15.081 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.081285 | orchestrator | 00:01:15.081 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.081321 | orchestrator | 00:01:15.081 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.081356 | orchestrator | 00:01:15.081 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.081392 | orchestrator | 00:01:15.081 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.081412 | orchestrator | 00:01:15.081 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.081441 | orchestrator | 00:01:15.081 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.081449 | orchestrator | 00:01:15.081 STDOUT terraform:  } 2025-05-04 00:01:15.081471 | orchestrator | 00:01:15.081 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.081500 | orchestrator | 00:01:15.081 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.081515 | orchestrator | 00:01:15.081 STDOUT terraform:  } 2025-05-04 00:01:15.081539 | orchestrator | 00:01:15.081 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.081554 | orchestrator | 00:01:15.081 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.081579 | orchestrator | 00:01:15.081 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-04 00:01:15.081608 | orchestrator | 00:01:15.081 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.081623 | orchestrator | 00:01:15.081 STDOUT terraform:  } 2025-05-04 00:01:15.081631 | orchestrator | 00:01:15.081 STDOUT terraform:  } 2025-05-04 00:01:15.081679 | orchestrator | 00:01:15.081 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-05-04 00:01:15.081723 | orchestrator | 00:01:15.081 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 00:01:15.081758 | orchestrator | 00:01:15.081 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.081794 | orchestrator | 00:01:15.081 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.081830 | orchestrator | 00:01:15.081 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.081866 | orchestrator | 00:01:15.081 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.081902 | orchestrator | 00:01:15.081 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.081939 | orchestrator | 00:01:15.081 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.081975 | orchestrator | 00:01:15.081 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.082046 | orchestrator | 00:01:15.081 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.082069 | orchestrator | 00:01:15.082 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.082106 | orchestrator | 00:01:15.082 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.082141 | orchestrator | 00:01:15.082 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.082176 | orchestrator | 00:01:15.082 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.082212 | orchestrator | 00:01:15.082 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.082248 | orchestrator | 00:01:15.082 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.082284 | orchestrator | 00:01:15.082 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.082320 | orchestrator | 00:01:15.082 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.082340 | 
orchestrator | 00:01:15.082 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.082368 | orchestrator | 00:01:15.082 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.082376 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082399 | orchestrator | 00:01:15.082 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.082429 | orchestrator | 00:01:15.082 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.082436 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082458 | orchestrator | 00:01:15.082 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.082486 | orchestrator | 00:01:15.082 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.082493 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082517 | orchestrator | 00:01:15.082 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.082545 | orchestrator | 00:01:15.082 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.082552 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082580 | orchestrator | 00:01:15.082 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.082587 | orchestrator | 00:01:15.082 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.082615 | orchestrator | 00:01:15.082 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-04 00:01:15.082644 | orchestrator | 00:01:15.082 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.082659 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082667 | orchestrator | 00:01:15.082 STDOUT terraform:  } 2025-05-04 00:01:15.082717 | orchestrator | 00:01:15.082 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-04 00:01:15.082763 | orchestrator | 00:01:15.082 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 
00:01:15.082799 | orchestrator | 00:01:15.082 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.082835 | orchestrator | 00:01:15.082 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.082871 | orchestrator | 00:01:15.082 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.082909 | orchestrator | 00:01:15.082 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.082943 | orchestrator | 00:01:15.082 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.082979 | orchestrator | 00:01:15.082 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.083044 | orchestrator | 00:01:15.082 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.083072 | orchestrator | 00:01:15.083 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.083108 | orchestrator | 00:01:15.083 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.083145 | orchestrator | 00:01:15.083 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.083182 | orchestrator | 00:01:15.083 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.083217 | orchestrator | 00:01:15.083 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.083253 | orchestrator | 00:01:15.083 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.083289 | orchestrator | 00:01:15.083 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.083324 | orchestrator | 00:01:15.083 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.083361 | orchestrator | 00:01:15.083 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.083380 | orchestrator | 00:01:15.083 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.083411 | orchestrator | 00:01:15.083 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-05-04 00:01:15.083418 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083441 | orchestrator | 00:01:15.083 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.083470 | orchestrator | 00:01:15.083 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.083477 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083499 | orchestrator | 00:01:15.083 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.083528 | orchestrator | 00:01:15.083 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.083536 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083557 | orchestrator | 00:01:15.083 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.083587 | orchestrator | 00:01:15.083 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.083594 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083621 | orchestrator | 00:01:15.083 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.083635 | orchestrator | 00:01:15.083 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.083661 | orchestrator | 00:01:15.083 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-04 00:01:15.083744 | orchestrator | 00:01:15.083 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.083768 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083774 | orchestrator | 00:01:15.083 STDOUT terraform:  } 2025-05-04 00:01:15.083780 | orchestrator | 00:01:15.083 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-04 00:01:15.083814 | orchestrator | 00:01:15.083 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 00:01:15.083852 | orchestrator | 00:01:15.083 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.083887 | orchestrator | 00:01:15.083 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.083922 | orchestrator | 00:01:15.083 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.083959 | orchestrator | 00:01:15.083 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.083995 | orchestrator | 00:01:15.083 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.084043 | orchestrator | 00:01:15.083 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.084079 | orchestrator | 00:01:15.084 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.084115 | orchestrator | 00:01:15.084 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.084152 | orchestrator | 00:01:15.084 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.084187 | orchestrator | 00:01:15.084 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.084225 | orchestrator | 00:01:15.084 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.084262 | orchestrator | 00:01:15.084 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.084297 | orchestrator | 00:01:15.084 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.084338 | orchestrator | 00:01:15.084 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.084369 | orchestrator | 00:01:15.084 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.084405 | orchestrator | 00:01:15.084 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.084426 | orchestrator | 00:01:15.084 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.084456 | orchestrator | 00:01:15.084 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.084463 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084486 | orchestrator | 00:01:15.084 STDOUT terraform:  
+ allowed_address_pairs { 2025-05-04 00:01:15.084516 | orchestrator | 00:01:15.084 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.084524 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084546 | orchestrator | 00:01:15.084 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.084575 | orchestrator | 00:01:15.084 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.084582 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084604 | orchestrator | 00:01:15.084 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.084633 | orchestrator | 00:01:15.084 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.084640 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084666 | orchestrator | 00:01:15.084 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.084674 | orchestrator | 00:01:15.084 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.084704 | orchestrator | 00:01:15.084 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-04 00:01:15.084732 | orchestrator | 00:01:15.084 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.084739 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084756 | orchestrator | 00:01:15.084 STDOUT terraform:  } 2025-05-04 00:01:15.084802 | orchestrator | 00:01:15.084 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-04 00:01:15.084847 | orchestrator | 00:01:15.084 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 00:01:15.084884 | orchestrator | 00:01:15.084 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.084920 | orchestrator | 00:01:15.084 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.084956 | orchestrator | 00:01:15.084 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-04 00:01:15.084993 | orchestrator | 00:01:15.084 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.085045 | orchestrator | 00:01:15.084 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.085076 | orchestrator | 00:01:15.085 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.085113 | orchestrator | 00:01:15.085 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.085150 | orchestrator | 00:01:15.085 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.085187 | orchestrator | 00:01:15.085 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.085223 | orchestrator | 00:01:15.085 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.085259 | orchestrator | 00:01:15.085 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.085294 | orchestrator | 00:01:15.085 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.085330 | orchestrator | 00:01:15.085 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.085367 | orchestrator | 00:01:15.085 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.085403 | orchestrator | 00:01:15.085 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.085439 | orchestrator | 00:01:15.085 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.085459 | orchestrator | 00:01:15.085 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.085489 | orchestrator | 00:01:15.085 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.085497 | orchestrator | 00:01:15.085 STDOUT terraform:  } 2025-05-04 00:01:15.085518 | orchestrator | 00:01:15.085 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.085549 | orchestrator | 00:01:15.085 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.085556 | 
orchestrator | 00:01:15.085 STDOUT terraform:  } 2025-05-04 00:01:15.085640 | orchestrator | 00:01:15.085 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.085647 | orchestrator | 00:01:15.085 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.085652 | orchestrator | 00:01:15.085 STDOUT terraform:  } 2025-05-04 00:01:15.085664 | orchestrator | 00:01:15.085 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.085671 | orchestrator | 00:01:15.085 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.085688 | orchestrator | 00:01:15.085 STDOUT terraform:  } 2025-05-04 00:01:15.085694 | orchestrator | 00:01:15.085 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.085700 | orchestrator | 00:01:15.085 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.085716 | orchestrator | 00:01:15.085 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-04 00:01:15.086548 | orchestrator | 00:01:15.085 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.086567 | orchestrator | 00:01:15.086 STDOUT terraform:  } 2025-05-04 00:01:15.086575 | orchestrator | 00:01:15.086 STDOUT terraform:  } 2025-05-04 00:01:15.086625 | orchestrator | 00:01:15.086 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-04 00:01:15.086675 | orchestrator | 00:01:15.086 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 00:01:15.086712 | orchestrator | 00:01:15.086 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.086750 | orchestrator | 00:01:15.086 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.086785 | orchestrator | 00:01:15.086 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.086823 | orchestrator | 00:01:15.086 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.086867 | orchestrator | 
00:01:15.086 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.086903 | orchestrator | 00:01:15.086 STDOUT terraform:  + device_owner = (known after apply) 2025-05-04 00:01:15.086939 | orchestrator | 00:01:15.086 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.086982 | orchestrator | 00:01:15.086 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.087030 | orchestrator | 00:01:15.086 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.087065 | orchestrator | 00:01:15.087 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.087100 | orchestrator | 00:01:15.087 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.087136 | orchestrator | 00:01:15.087 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.087174 | orchestrator | 00:01:15.087 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.087211 | orchestrator | 00:01:15.087 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.087245 | orchestrator | 00:01:15.087 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.087299 | orchestrator | 00:01:15.087 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.087319 | orchestrator | 00:01:15.087 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.087348 | orchestrator | 00:01:15.087 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.087360 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087377 | orchestrator | 00:01:15.087 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.087403 | orchestrator | 00:01:15.087 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.087411 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087433 | orchestrator | 00:01:15.087 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 
00:01:15.087463 | orchestrator | 00:01:15.087 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.087469 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087493 | orchestrator | 00:01:15.087 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.087522 | orchestrator | 00:01:15.087 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.087537 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087561 | orchestrator | 00:01:15.087 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.087569 | orchestrator | 00:01:15.087 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.087597 | orchestrator | 00:01:15.087 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-04 00:01:15.087627 | orchestrator | 00:01:15.087 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.087634 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087650 | orchestrator | 00:01:15.087 STDOUT terraform:  } 2025-05-04 00:01:15.087696 | orchestrator | 00:01:15.087 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-04 00:01:15.087739 | orchestrator | 00:01:15.087 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-04 00:01:15.087775 | orchestrator | 00:01:15.087 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.087811 | orchestrator | 00:01:15.087 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-04 00:01:15.087847 | orchestrator | 00:01:15.087 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-04 00:01:15.087883 | orchestrator | 00:01:15.087 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.087919 | orchestrator | 00:01:15.087 STDOUT terraform:  + device_id = (known after apply) 2025-05-04 00:01:15.087956 | orchestrator | 00:01:15.087 STDOUT terraform:  + device_owner = (known after 
apply) 2025-05-04 00:01:15.087991 | orchestrator | 00:01:15.087 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-04 00:01:15.088039 | orchestrator | 00:01:15.087 STDOUT terraform:  + dns_name = (known after apply) 2025-05-04 00:01:15.088076 | orchestrator | 00:01:15.088 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.088112 | orchestrator | 00:01:15.088 STDOUT terraform:  + mac_address = (known after apply) 2025-05-04 00:01:15.088147 | orchestrator | 00:01:15.088 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.088182 | orchestrator | 00:01:15.088 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-04 00:01:15.088218 | orchestrator | 00:01:15.088 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-04 00:01:15.088254 | orchestrator | 00:01:15.088 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.088290 | orchestrator | 00:01:15.088 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-04 00:01:15.088326 | orchestrator | 00:01:15.088 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.088347 | orchestrator | 00:01:15.088 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.088376 | orchestrator | 00:01:15.088 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-04 00:01:15.088384 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088405 | orchestrator | 00:01:15.088 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.088434 | orchestrator | 00:01:15.088 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-04 00:01:15.088441 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088464 | orchestrator | 00:01:15.088 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.088494 | orchestrator | 00:01:15.088 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-04 00:01:15.088501 | orchestrator | 00:01:15.088 STDOUT terraform:  } 
2025-05-04 00:01:15.088523 | orchestrator | 00:01:15.088 STDOUT terraform:  + allowed_address_pairs { 2025-05-04 00:01:15.088552 | orchestrator | 00:01:15.088 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-04 00:01:15.088560 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088586 | orchestrator | 00:01:15.088 STDOUT terraform:  + binding (known after apply) 2025-05-04 00:01:15.088595 | orchestrator | 00:01:15.088 STDOUT terraform:  + fixed_ip { 2025-05-04 00:01:15.088623 | orchestrator | 00:01:15.088 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-04 00:01:15.088659 | orchestrator | 00:01:15.088 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.088667 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088674 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088721 | orchestrator | 00:01:15.088 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-04 00:01:15.088772 | orchestrator | 00:01:15.088 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-04 00:01:15.088788 | orchestrator | 00:01:15.088 STDOUT terraform:  + force_destroy = false 2025-05-04 00:01:15.088818 | orchestrator | 00:01:15.088 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.088846 | orchestrator | 00:01:15.088 STDOUT terraform:  + port_id = (known after apply) 2025-05-04 00:01:15.088875 | orchestrator | 00:01:15.088 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.088904 | orchestrator | 00:01:15.088 STDOUT terraform:  + router_id = (known after apply) 2025-05-04 00:01:15.088933 | orchestrator | 00:01:15.088 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-04 00:01:15.088941 | orchestrator | 00:01:15.088 STDOUT terraform:  } 2025-05-04 00:01:15.088963 | orchestrator | 00:01:15.088 STDOUT terraform:  # openstack_networking_router_v2.router will be created
2025-05-04 00:01:15.089098 | orchestrator | 00:01:15.089 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-04 00:01:15.089105 | orchestrator | 00:01:15.089 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-04 00:01:15.089139 | orchestrator | 00:01:15.089 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.089162 | orchestrator | 00:01:15.089 STDOUT terraform:  + availability_zone_hints = [ 2025-05-04 00:01:15.089178 | orchestrator | 00:01:15.089 STDOUT terraform:  + "nova", 2025-05-04 00:01:15.089186 | orchestrator | 00:01:15.089 STDOUT terraform:  ] 2025-05-04 00:01:15.089257 | orchestrator | 00:01:15.089 STDOUT terraform:  + distributed = (known after apply) 2025-05-04 00:01:15.089292 | orchestrator | 00:01:15.089 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-04 00:01:15.089341 | orchestrator | 00:01:15.089 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-04 00:01:15.089378 | orchestrator | 00:01:15.089 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.089408 | orchestrator | 00:01:15.089 STDOUT terraform:  + name = "testbed" 2025-05-04 00:01:15.089445 | orchestrator | 00:01:15.089 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.089480 | orchestrator | 00:01:15.089 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.089509 | orchestrator | 00:01:15.089 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-04 00:01:15.089517 | orchestrator | 00:01:15.089 STDOUT terraform:  } 2025-05-04 00:01:15.089572 | orchestrator | 00:01:15.089 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-04 00:01:15.089626 | orchestrator | 00:01:15.089 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule1" { 2025-05-04 00:01:15.089645 | orchestrator | 00:01:15.089 STDOUT terraform:  + description = "ssh" 2025-05-04 00:01:15.089670 | orchestrator | 00:01:15.089 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.089691 | orchestrator | 00:01:15.089 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.089724 | orchestrator | 00:01:15.089 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.089743 | orchestrator | 00:01:15.089 STDOUT terraform:  + port_range_max = 22 2025-05-04 00:01:15.089763 | orchestrator | 00:01:15.089 STDOUT terraform:  + port_range_min = 22 2025-05-04 00:01:15.089784 | orchestrator | 00:01:15.089 STDOUT terraform:  + protocol = "tcp" 2025-05-04 00:01:15.089815 | orchestrator | 00:01:15.089 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.089845 | orchestrator | 00:01:15.089 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.089869 | orchestrator | 00:01:15.089 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.089899 | orchestrator | 00:01:15.089 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.089930 | orchestrator | 00:01:15.089 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.089937 | orchestrator | 00:01:15.089 STDOUT terraform:  } 2025-05-04 00:01:15.089994 | orchestrator | 00:01:15.089 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-04 00:01:15.090073 | orchestrator | 00:01:15.089 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-04 00:01:15.090097 | orchestrator | 00:01:15.090 STDOUT terraform:  + description = "wireguard" 2025-05-04 00:01:15.090123 | orchestrator | 00:01:15.090 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.090145 | orchestrator | 00:01:15.090 STDOUT terraform:  + ethertype = "IPv4" 
2025-05-04 00:01:15.090177 | orchestrator | 00:01:15.090 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.090198 | orchestrator | 00:01:15.090 STDOUT terraform:  + port_range_max = 51820 2025-05-04 00:01:15.090219 | orchestrator | 00:01:15.090 STDOUT terraform:  + port_range_min = 51820 2025-05-04 00:01:15.090240 | orchestrator | 00:01:15.090 STDOUT terraform:  + protocol = "udp" 2025-05-04 00:01:15.090271 | orchestrator | 00:01:15.090 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.090301 | orchestrator | 00:01:15.090 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.090327 | orchestrator | 00:01:15.090 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.090357 | orchestrator | 00:01:15.090 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.090387 | orchestrator | 00:01:15.090 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.090394 | orchestrator | 00:01:15.090 STDOUT terraform:  } 2025-05-04 00:01:15.090451 | orchestrator | 00:01:15.090 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-04 00:01:15.090504 | orchestrator | 00:01:15.090 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-04 00:01:15.090528 | orchestrator | 00:01:15.090 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.090549 | orchestrator | 00:01:15.090 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.090580 | orchestrator | 00:01:15.090 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.090601 | orchestrator | 00:01:15.090 STDOUT terraform:  + protocol = "tcp" 2025-05-04 00:01:15.090633 | orchestrator | 00:01:15.090 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.090663 | orchestrator | 00:01:15.090 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 
00:01:15.090693 | orchestrator | 00:01:15.090 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-04 00:01:15.090723 | orchestrator | 00:01:15.090 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.090755 | orchestrator | 00:01:15.090 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.090762 | orchestrator | 00:01:15.090 STDOUT terraform:  } 2025-05-04 00:01:15.090817 | orchestrator | 00:01:15.090 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-04 00:01:15.090870 | orchestrator | 00:01:15.090 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-04 00:01:15.090893 | orchestrator | 00:01:15.090 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.090914 | orchestrator | 00:01:15.090 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.090946 | orchestrator | 00:01:15.090 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.090966 | orchestrator | 00:01:15.090 STDOUT terraform:  + protocol = "udp" 2025-05-04 00:01:15.091048 | orchestrator | 00:01:15.090 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.091077 | orchestrator | 00:01:15.090 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.091087 | orchestrator | 00:01:15.091 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-04 00:01:15.091103 | orchestrator | 00:01:15.091 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.091134 | orchestrator | 00:01:15.091 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.091141 | orchestrator | 00:01:15.091 STDOUT terraform:  } 2025-05-04 00:01:15.091196 | orchestrator | 00:01:15.091 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-04 00:01:15.091249 | orchestrator | 00:01:15.091 
STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-04 00:01:15.091273 | orchestrator | 00:01:15.091 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.091293 | orchestrator | 00:01:15.091 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.091326 | orchestrator | 00:01:15.091 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.091347 | orchestrator | 00:01:15.091 STDOUT terraform:  + protocol = "icmp" 2025-05-04 00:01:15.091377 | orchestrator | 00:01:15.091 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.091407 | orchestrator | 00:01:15.091 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.091432 | orchestrator | 00:01:15.091 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.091463 | orchestrator | 00:01:15.091 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.091493 | orchestrator | 00:01:15.091 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.091500 | orchestrator | 00:01:15.091 STDOUT terraform:  } 2025-05-04 00:01:15.091555 | orchestrator | 00:01:15.091 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-04 00:01:15.091607 | orchestrator | 00:01:15.091 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-04 00:01:15.091629 | orchestrator | 00:01:15.091 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.091650 | orchestrator | 00:01:15.091 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.091682 | orchestrator | 00:01:15.091 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.091703 | orchestrator | 00:01:15.091 STDOUT terraform:  + protocol = "tcp" 2025-05-04 00:01:15.091733 | orchestrator | 00:01:15.091 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.091764 | orchestrator 
| 00:01:15.091 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.091790 | orchestrator | 00:01:15.091 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.091820 | orchestrator | 00:01:15.091 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.091849 | orchestrator | 00:01:15.091 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.091857 | orchestrator | 00:01:15.091 STDOUT terraform:  } 2025-05-04 00:01:15.091909 | orchestrator | 00:01:15.091 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-04 00:01:15.091961 | orchestrator | 00:01:15.091 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-04 00:01:15.091985 | orchestrator | 00:01:15.091 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.092006 | orchestrator | 00:01:15.091 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.092047 | orchestrator | 00:01:15.092 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.092068 | orchestrator | 00:01:15.092 STDOUT terraform:  + protocol = "udp" 2025-05-04 00:01:15.092099 | orchestrator | 00:01:15.092 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.092129 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.092155 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.092187 | orchestrator | 00:01:15.092 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.092217 | orchestrator | 00:01:15.092 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.092225 | orchestrator | 00:01:15.092 STDOUT terraform:  } 2025-05-04 00:01:15.092277 | orchestrator | 00:01:15.092 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 
2025-05-04 00:01:15.092328 | orchestrator | 00:01:15.092 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-04 00:01:15.092351 | orchestrator | 00:01:15.092 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.092372 | orchestrator | 00:01:15.092 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.092404 | orchestrator | 00:01:15.092 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.092425 | orchestrator | 00:01:15.092 STDOUT terraform:  + protocol = "icmp" 2025-05-04 00:01:15.092456 | orchestrator | 00:01:15.092 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.092486 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.092512 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.092542 | orchestrator | 00:01:15.092 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.092571 | orchestrator | 00:01:15.092 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.092579 | orchestrator | 00:01:15.092 STDOUT terraform:  } 2025-05-04 00:01:15.092630 | orchestrator | 00:01:15.092 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-04 00:01:15.092680 | orchestrator | 00:01:15.092 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-04 00:01:15.092700 | orchestrator | 00:01:15.092 STDOUT terraform:  + description = "vrrp" 2025-05-04 00:01:15.092724 | orchestrator | 00:01:15.092 STDOUT terraform:  + direction = "ingress" 2025-05-04 00:01:15.092745 | orchestrator | 00:01:15.092 STDOUT terraform:  + ethertype = "IPv4" 2025-05-04 00:01:15.092777 | orchestrator | 00:01:15.092 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.092798 | orchestrator | 00:01:15.092 STDOUT terraform:  + protocol = "112" 
2025-05-04 00:01:15.092829 | orchestrator | 00:01:15.092 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.092859 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-04 00:01:15.092883 | orchestrator | 00:01:15.092 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-04 00:01:15.092917 | orchestrator | 00:01:15.092 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-04 00:01:15.092943 | orchestrator | 00:01:15.092 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.092950 | orchestrator | 00:01:15.092 STDOUT terraform:  } 2025-05-04 00:01:15.093002 | orchestrator | 00:01:15.092 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-04 00:01:15.093070 | orchestrator | 00:01:15.092 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-04 00:01:15.093097 | orchestrator | 00:01:15.093 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.093131 | orchestrator | 00:01:15.093 STDOUT terraform:  + description = "management security group" 2025-05-04 00:01:15.093160 | orchestrator | 00:01:15.093 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.093189 | orchestrator | 00:01:15.093 STDOUT terraform:  + name = "testbed-management" 2025-05-04 00:01:15.093217 | orchestrator | 00:01:15.093 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.093246 | orchestrator | 00:01:15.093 STDOUT terraform:  + stateful = (known after apply) 2025-05-04 00:01:15.093274 | orchestrator | 00:01:15.093 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.093282 | orchestrator | 00:01:15.093 STDOUT terraform:  } 2025-05-04 00:01:15.093330 | orchestrator | 00:01:15.093 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-04 00:01:15.093375 | orchestrator | 00:01:15.093 STDOUT 
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-04 00:01:15.093404 | orchestrator | 00:01:15.093 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.093432 | orchestrator | 00:01:15.093 STDOUT terraform:  + description = "node security group" 2025-05-04 00:01:15.093460 | orchestrator | 00:01:15.093 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.093485 | orchestrator | 00:01:15.093 STDOUT terraform:  + name = "testbed-node" 2025-05-04 00:01:15.093514 | orchestrator | 00:01:15.093 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.093542 | orchestrator | 00:01:15.093 STDOUT terraform:  + stateful = (known after apply) 2025-05-04 00:01:15.093570 | orchestrator | 00:01:15.093 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.093578 | orchestrator | 00:01:15.093 STDOUT terraform:  } 2025-05-04 00:01:15.093626 | orchestrator | 00:01:15.093 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-04 00:01:15.093670 | orchestrator | 00:01:15.093 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-04 00:01:15.093701 | orchestrator | 00:01:15.093 STDOUT terraform:  + all_tags = (known after apply) 2025-05-04 00:01:15.093732 | orchestrator | 00:01:15.093 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-04 00:01:15.093753 | orchestrator | 00:01:15.093 STDOUT terraform:  + dns_nameservers = [ 2025-05-04 00:01:15.093769 | orchestrator | 00:01:15.093 STDOUT terraform:  + "8.8.8.8", 2025-05-04 00:01:15.093786 | orchestrator | 00:01:15.093 STDOUT terraform:  + "9.9.9.9", 2025-05-04 00:01:15.093793 | orchestrator | 00:01:15.093 STDOUT terraform:  ] 2025-05-04 00:01:15.093818 | orchestrator | 00:01:15.093 STDOUT terraform:  + enable_dhcp = true 2025-05-04 00:01:15.093848 | orchestrator | 00:01:15.093 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-04 00:01:15.093881 | 
orchestrator | 00:01:15.093 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.093901 | orchestrator | 00:01:15.093 STDOUT terraform:  + ip_version = 4 2025-05-04 00:01:15.093930 | orchestrator | 00:01:15.093 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-04 00:01:15.093962 | orchestrator | 00:01:15.093 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-04 00:01:15.093998 | orchestrator | 00:01:15.093 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-04 00:01:15.094058 | orchestrator | 00:01:15.093 STDOUT terraform:  + network_id = (known after apply) 2025-05-04 00:01:15.094068 | orchestrator | 00:01:15.094 STDOUT terraform:  + no_gateway = false 2025-05-04 00:01:15.094097 | orchestrator | 00:01:15.094 STDOUT terraform:  + region = (known after apply) 2025-05-04 00:01:15.094127 | orchestrator | 00:01:15.094 STDOUT terraform:  + service_types = (known after apply) 2025-05-04 00:01:15.094158 | orchestrator | 00:01:15.094 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-04 00:01:15.094176 | orchestrator | 00:01:15.094 STDOUT terraform:  + allocation_pool { 2025-05-04 00:01:15.094201 | orchestrator | 00:01:15.094 STDOUT terraform:  + end = "192.168.31.250" 2025-05-04 00:01:15.094225 | orchestrator | 00:01:15.094 STDOUT terraform:  + start = "192.168.31.200" 2025-05-04 00:01:15.094233 | orchestrator | 00:01:15.094 STDOUT terraform:  } 2025-05-04 00:01:15.094248 | orchestrator | 00:01:15.094 STDOUT terraform:  } 2025-05-04 00:01:15.094275 | orchestrator | 00:01:15.094 STDOUT terraform:  # terraform_data.image will be created 2025-05-04 00:01:15.094299 | orchestrator | 00:01:15.094 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-04 00:01:15.094324 | orchestrator | 00:01:15.094 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.094337 | orchestrator | 00:01:15.094 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-04 00:01:15.094363 | orchestrator | 00:01:15.094 
STDOUT terraform:  + output = (known after apply) 2025-05-04 00:01:15.094370 | orchestrator | 00:01:15.094 STDOUT terraform:  } 2025-05-04 00:01:15.094402 | orchestrator | 00:01:15.094 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-04 00:01:15.094431 | orchestrator | 00:01:15.094 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-04 00:01:15.094456 | orchestrator | 00:01:15.094 STDOUT terraform:  + id = (known after apply) 2025-05-04 00:01:15.094476 | orchestrator | 00:01:15.094 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-04 00:01:15.094500 | orchestrator | 00:01:15.094 STDOUT terraform:  + output = (known after apply) 2025-05-04 00:01:15.094508 | orchestrator | 00:01:15.094 STDOUT terraform:  } 2025-05-04 00:01:15.094540 | orchestrator | 00:01:15.094 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-05-04 00:01:15.094547 | orchestrator | 00:01:15.094 STDOUT terraform: Changes to Outputs: 2025-05-04 00:01:15.094576 | orchestrator | 00:01:15.094 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-04 00:01:15.094603 | orchestrator | 00:01:15.094 STDOUT terraform:  + private_key = (sensitive value) 2025-05-04 00:01:15.307822 | orchestrator | 00:01:15.307 STDOUT terraform: terraform_data.image: Creating... 2025-05-04 00:01:15.308823 | orchestrator | 00:01:15.307 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-04 00:01:15.308884 | orchestrator | 00:01:15.307 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=32db0819-9a35-33a9-6957-45d8751774e5] 2025-05-04 00:01:15.308911 | orchestrator | 00:01:15.308 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4235762b-40c9-ce4b-b9d1-63f00e003c1b] 2025-05-04 00:01:15.325374 | orchestrator | 00:01:15.325 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
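The plan summary above ("Plan: 82 to add, 0 to change, 0 to destroy") covers, among other things, the management security group and its rules. As a sketch, that part of the plan corresponds to Terraform configuration roughly like the following. Resource labels and attribute values are taken from the plan output in the log; the HCL layout and the `security_group_id` wiring are assumptions, since the actual source files are not shown here:

```hcl
# Sketch reconstructed from the plan output; not the testbed's actual source.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# Rule 1 from the plan: allow SSH from anywhere.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed reference; the plan only shows "(known after apply)" here.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The remaining rules in the plan (wireguard on udp/51820, intra-subnet tcp/udp from 192.168.16.0/20, icmp, and vrrp via protocol "112") follow the same shape with different protocol and port values.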
2025-05-04 00:01:15.325661 | orchestrator | 00:01:15.325 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-04 00:01:15.334482 | orchestrator | 00:01:15.334 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-04 00:01:15.340681 | orchestrator | 00:01:15.340 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-04 00:01:15.342187 | orchestrator | 00:01:15.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-05-04 00:01:15.342231 | orchestrator | 00:01:15.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-05-04 00:01:15.342832 | orchestrator | 00:01:15.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-05-04 00:01:15.343375 | orchestrator | 00:01:15.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-05-04 00:01:15.346761 | orchestrator | 00:01:15.346 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-05-04 00:01:15.347743 | orchestrator | 00:01:15.347 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-04 00:01:15.805465 | orchestrator | 00:01:15.805 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-04 00:01:15.813674 | orchestrator | 00:01:15.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-04 00:01:15.818226 | orchestrator | 00:01:15.816 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-04 00:01:15.827378 | orchestrator | 00:01:15.827 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 
2025-05-04 00:01:16.122378 | orchestrator | 00:01:16.121 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-04 00:01:16.130280 | orchestrator | 00:01:16.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-04 00:01:21.128121 | orchestrator | 00:01:21.127 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=04504049-bc5c-4545-ad1b-313c0981be02] 2025-05-04 00:01:21.134627 | orchestrator | 00:01:21.134 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-04 00:01:25.342434 | orchestrator | 00:01:25.341 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-05-04 00:01:25.343315 | orchestrator | 00:01:25.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-05-04 00:01:25.344333 | orchestrator | 00:01:25.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-05-04 00:01:25.344645 | orchestrator | 00:01:25.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-05-04 00:01:25.347660 | orchestrator | 00:01:25.347 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-05-04 00:01:25.348855 | orchestrator | 00:01:25.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-04 00:01:25.814999 | orchestrator | 00:01:25.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-04 00:01:25.828175 | orchestrator | 00:01:25.827 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... 
[10s elapsed] 2025-05-04 00:01:25.925679 | orchestrator | 00:01:25.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=5892b7dc-a458-477e-893f-beef3eb00cef] 2025-05-04 00:01:25.938326 | orchestrator | 00:01:25.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=9737e10e-3051-48df-9cd6-5b074c161c93] 2025-05-04 00:01:25.946137 | orchestrator | 00:01:25.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-04 00:01:25.949757 | orchestrator | 00:01:25.949 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=e986bc1a-3638-41fe-8757-5755b3d430d7] 2025-05-04 00:01:25.951220 | orchestrator | 00:01:25.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=12665b64-aca9-4755-9dee-a26132b82b0a] 2025-05-04 00:01:25.951480 | orchestrator | 00:01:25.951 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-05-04 00:01:25.955354 | orchestrator | 00:01:25.955 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-04 00:01:25.957695 | orchestrator | 00:01:25.957 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-04 00:01:25.971057 | orchestrator | 00:01:25.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=fce9c480-0ce5-4d2c-b3f0-14cdf3862254] 2025-05-04 00:01:25.973737 | orchestrator | 00:01:25.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=a335202a-bc46-4a1a-9390-24712f04f8da] 2025-05-04 00:01:25.976005 | orchestrator | 00:01:25.975 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-04 00:01:25.978707 | orchestrator | 00:01:25.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 
2025-05-04 00:01:26.011993 | orchestrator | 00:01:26.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=41a828c4-aadc-4592-9baf-1de326a5c86d] 2025-05-04 00:01:26.018681 | orchestrator | 00:01:26.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-05-04 00:01:26.046860 | orchestrator | 00:01:26.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=228c4a8e-d362-4d42-8ea3-c65a43234221] 2025-05-04 00:01:26.054147 | orchestrator | 00:01:26.053 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-04 00:01:26.131244 | orchestrator | 00:01:26.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-04 00:01:26.315536 | orchestrator | 00:01:26.315 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=44aea083-53c7-4db3-b476-f0e15c33499e] 2025-05-04 00:01:26.325849 | orchestrator | 00:01:26.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-04 00:01:31.137527 | orchestrator | 00:01:31.137 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-04 00:01:31.288177 | orchestrator | 00:01:31.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=887667df-8a23-4f97-9ff0-05cbc5f29729] 2025-05-04 00:01:31.300172 | orchestrator | 00:01:31.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-04 00:01:35.942657 | orchestrator | 00:01:35.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-04 00:01:35.952782 | orchestrator | 00:01:35.952 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... 
[10s elapsed] 2025-05-04 00:01:35.956411 | orchestrator | 00:01:35.955 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-04 00:01:35.958420 | orchestrator | 00:01:35.958 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-04 00:01:35.976917 | orchestrator | 00:01:35.976 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-04 00:01:35.979770 | orchestrator | 00:01:35.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-05-04 00:01:36.020408 | orchestrator | 00:01:36.019 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-05-04 00:01:36.055604 | orchestrator | 00:01:36.055 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-04 00:01:36.158531 | orchestrator | 00:01:36.157 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=e6952e91-4add-41f4-9682-2820842eaefb] 2025-05-04 00:01:36.168948 | orchestrator | 00:01:36.168 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=3434c0cd-230e-4587-95bc-9baf80b8630f] 2025-05-04 00:01:36.174528 | orchestrator | 00:01:36.174 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-04 00:01:36.177377 | orchestrator | 00:01:36.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-04 00:01:36.196911 | orchestrator | 00:01:36.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=f0e304d0-da68-45fd-ab80-c7aa1a870cfc] 2025-05-04 00:01:36.201845 | orchestrator | 00:01:36.201 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-05-04 00:01:36.212621 | orchestrator | 00:01:36.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=4238a5d3-6f9a-453b-8646-1f6e7fcf7783]
2025-05-04 00:01:36.218252 | orchestrator | 00:01:36.217 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-04 00:01:36.219942 | orchestrator | 00:01:36.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=10380154-7d57-4db6-80c5-fea690e2f123]
2025-05-04 00:01:36.234442 | orchestrator | 00:01:36.234 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-04 00:01:36.236309 | orchestrator | 00:01:36.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=843cf234-6aef-404a-a841-1f1650f95beb]
2025-05-04 00:01:36.247617 | orchestrator | 00:01:36.247 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-04 00:01:36.251114 | orchestrator | 00:01:36.250 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=83114fc5ff2954518e2a10bf6dcbbde8aaa2bdc4]
2025-05-04 00:01:36.257939 | orchestrator | 00:01:36.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b]
2025-05-04 00:01:36.259242 | orchestrator | 00:01:36.259 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-04 00:01:36.262087 | orchestrator | 00:01:36.261 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-04 00:01:36.266996 | orchestrator | 00:01:36.266 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=ec616bd030324d0cb8de0b2393a2cab1a1014794]
2025-05-04 00:01:36.279108 | orchestrator | 00:01:36.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a]
2025-05-04 00:01:36.327147 | orchestrator | 00:01:36.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-04 00:01:36.659767 | orchestrator | 00:01:36.659 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47]
2025-05-04 00:01:41.301346 | orchestrator | 00:01:41.300 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-04 00:01:41.621615 | orchestrator | 00:01:41.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=86e714b2-7a79-4481-bc1e-8874c98b655d]
2025-05-04 00:01:42.157413 | orchestrator | 00:01:42.157 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=5fe55e83-9295-4ceb-b46c-9b7290f2601f]
2025-05-04 00:01:42.166937 | orchestrator | 00:01:42.166 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-04 00:01:46.176114 | orchestrator | 00:01:46.175 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-04 00:01:46.178184 | orchestrator | 00:01:46.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-04 00:01:46.202493 | orchestrator | 00:01:46.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-04 00:01:46.218905 | orchestrator | 00:01:46.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-04 00:01:46.236367 | orchestrator | 00:01:46.236 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-04 00:01:46.515347 | orchestrator | 00:01:46.514 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=04c131bc-fe2e-4a5c-b435-65085a31af09]
2025-05-04 00:01:46.549962 | orchestrator | 00:01:46.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=5ce6454b-2fbf-482a-841d-170d05af2df9]
2025-05-04 00:01:46.608341 | orchestrator | 00:01:46.607 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=122e7cd6-3312-4e09-8ed8-35b7e29a9b06]
2025-05-04 00:01:46.610959 | orchestrator | 00:01:46.610 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=13bb42a8-f5c9-4e2d-b57e-2b129d56f15c]
2025-05-04 00:01:46.621928 | orchestrator | 00:01:46.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=df83d2dc-f695-4d60-b23d-cf602fc737d6]
2025-05-04 00:01:48.703216 | orchestrator | 00:01:48.702 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=c9da6d9d-399f-4ba6-9766-b4dc68a9f1c2]
2025-05-04 00:01:48.708851 | orchestrator | 00:01:48.708 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-04 00:01:48.711210 | orchestrator | 00:01:48.710 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-04 00:01:48.711497 | orchestrator | 00:01:48.711 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-04 00:01:48.866701 | orchestrator | 00:01:48.866 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f1ba758e-d207-4ac7-a4bf-250450807cfd]
2025-05-04 00:01:48.874431 | orchestrator | 00:01:48.874 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-04 00:01:48.874564 | orchestrator | 00:01:48.874 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=cc81729b-ef49-4dc9-8db8-61683645cce6]
2025-05-04 00:01:48.886128 | orchestrator | 00:01:48.882 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-04 00:01:48.888888 | orchestrator | 00:01:48.883 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-04 00:01:48.888927 | orchestrator | 00:01:48.884 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-04 00:01:48.888938 | orchestrator | 00:01:48.884 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-04 00:01:48.888953 | orchestrator | 00:01:48.888 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-04 00:01:48.889419 | orchestrator | 00:01:48.889 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-04 00:01:48.893249 | orchestrator | 00:01:48.893 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-04 00:01:48.893505 | orchestrator | 00:01:48.893 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-04 00:01:49.010552 | orchestrator | 00:01:49.009 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3a768830-c35c-46e9-ace8-73afa7e73bf0]
2025-05-04 00:01:49.024182 | orchestrator | 00:01:49.023 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-04 00:01:49.037604 | orchestrator | 00:01:49.037 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=b8ef1896-6c19-4da5-8126-6c6bce0f7c59]
2025-05-04 00:01:49.047645 | orchestrator | 00:01:49.047 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-04 00:01:49.145867 | orchestrator | 00:01:49.145 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=013e0aac-8773-4d59-8033-130d712cd8eb]
2025-05-04 00:01:49.159963 | orchestrator | 00:01:49.159 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-04 00:01:49.252106 | orchestrator | 00:01:49.251 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=4a3059ef-0946-4d98-b644-9de4fafb89ef]
2025-05-04 00:01:49.258314 | orchestrator | 00:01:49.257 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-04 00:01:49.361075 | orchestrator | 00:01:49.360 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=b8a24a8e-e03b-485a-9a41-673f484bbe91]
2025-05-04 00:01:49.374677 | orchestrator | 00:01:49.374 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-04 00:01:49.387605 | orchestrator | 00:01:49.387 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=1d488f53-421b-4ebf-866a-e547b8e3e659]
2025-05-04 00:01:49.392669 | orchestrator | 00:01:49.392 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-04 00:01:49.538615 | orchestrator | 00:01:49.538 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=abe899af-6054-414a-8fd9-78cf01d734d6]
2025-05-04 00:01:49.544834 | orchestrator | 00:01:49.544 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-04 00:01:49.556231 | orchestrator | 00:01:49.555 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=bc4d4542-a4b0-4f25-9554-837c52ba85ae]
2025-05-04 00:01:49.672890 | orchestrator | 00:01:49.672 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=a50871a6-d929-4426-be30-201128c012f3]
2025-05-04 00:01:54.547701 | orchestrator | 00:01:54.547 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=bd28d316-7724-4439-949f-1b91493f66a5]
2025-05-04 00:01:54.673577 | orchestrator | 00:01:54.673 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=4967464c-ade0-4d86-ba70-ba21510038e5]
2025-05-04 00:01:54.872318 | orchestrator | 00:01:54.871 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=c3368b08-b762-4dfe-a4ba-a6901fecdd53]
2025-05-04 00:01:55.052204 | orchestrator | 00:01:55.051 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=7ddda34a-23c0-46f0-a1f1-441e5fafeb58]
2025-05-04 00:01:55.229523 | orchestrator | 00:01:55.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=e75119a0-6210-4f15-a7f6-c75b278489bf]
2025-05-04 00:01:55.342327 | orchestrator | 00:01:55.341 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=334c8cc6-831e-4f07-92f7-d2b39358fa90]
2025-05-04 00:01:55.708129 | orchestrator | 00:01:55.707 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=fa71e89c-cafb-4947-bd51-8d1b5d1dc7dc]
2025-05-04 00:01:56.340976 | orchestrator | 00:01:56.340 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=78699faf-2e80-4c1c-ab03-4758129e382d]
2025-05-04 00:01:56.370227 | orchestrator | 00:01:56.369 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-04 00:01:56.384994 | orchestrator | 00:01:56.384 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-04 00:01:56.385296 | orchestrator | 00:01:56.385 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-04 00:01:56.386204 | orchestrator | 00:01:56.386 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-04 00:01:56.393978 | orchestrator | 00:01:56.393 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-04 00:01:56.399408 | orchestrator | 00:01:56.399 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-04 00:01:56.408975 | orchestrator | 00:01:56.406 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-04 00:02:02.709479 | orchestrator | 00:02:02.708 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=2c9085f9-e477-46c7-9945-e8b556d1b0b2]
2025-05-04 00:02:02.718255 | orchestrator | 00:02:02.717 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-04 00:02:02.725343 | orchestrator | 00:02:02.725 STDOUT terraform: local_file.inventory: Creating...
2025-05-04 00:02:02.725879 | orchestrator | 00:02:02.725 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-04 00:02:02.730394 | orchestrator | 00:02:02.730 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=177ba9d57ab62075c6ad3b968129aaab9894543c]
2025-05-04 00:02:02.733531 | orchestrator | 00:02:02.733 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fc16b24d29d3dce1b5d3b8d965486aae66462f87]
2025-05-04 00:02:03.227412 | orchestrator | 00:02:03.226 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=2c9085f9-e477-46c7-9945-e8b556d1b0b2]
2025-05-04 00:02:06.388906 | orchestrator | 00:02:06.388 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-04 00:02:06.389072 | orchestrator | 00:02:06.388 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-04 00:02:06.389136 | orchestrator | 00:02:06.389 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-04 00:02:06.398338 | orchestrator | 00:02:06.398 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-04 00:02:06.400533 | orchestrator | 00:02:06.400 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-04 00:02:06.411885 | orchestrator | 00:02:06.411 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-04 00:02:16.389896 | orchestrator | 00:02:16.389 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-04 00:02:16.390113 | orchestrator | 00:02:16.389 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-04 00:02:16.390145 | orchestrator | 00:02:16.389 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-04 00:02:16.398870 | orchestrator | 00:02:16.398 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-04 00:02:16.400997 | orchestrator | 00:02:16.400 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-04 00:02:16.412466 | orchestrator | 00:02:16.412 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-04 00:02:16.930506 | orchestrator | 00:02:16.930 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=76ccc974-3d06-4c61-a521-cd6385e0e38e]
2025-05-04 00:02:16.942882 | orchestrator | 00:02:16.942 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=fc30e051-df26-454f-b755-ce610d311ae9]
2025-05-04 00:02:16.975099 | orchestrator | 00:02:16.974 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=ac9a029c-ed7a-4b47-8a49-b5fcd08f5dec]
2025-05-04 00:02:17.091068 | orchestrator | 00:02:17.090 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=5291009c-c967-480f-9a3d-0c6e5553c4b7]
2025-05-04 00:02:19.320300 | orchestrator | 00:02:19.319 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 23s [id=03e42279-f690-40d7-83b6-058e78326d5d]
2025-05-04 00:02:26.390633 | orchestrator | 00:02:26.390 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-04 00:02:27.808073 | orchestrator | 00:02:27.807 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=6a885672-57de-4e3b-b10f-dbd17d895d2a]
2025-05-04 00:02:27.819176 | orchestrator | 00:02:27.818 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-04 00:02:27.832871 | orchestrator | 00:02:27.832 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6822379599242796692]
2025-05-04 00:02:27.841992 | orchestrator | 00:02:27.841 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-04 00:02:27.842573 | orchestrator | 00:02:27.842 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-04 00:02:27.854324 | orchestrator | 00:02:27.854 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-05-04 00:02:27.861170 | orchestrator | 00:02:27.861 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-05-04 00:02:27.865594 | orchestrator | 00:02:27.865 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-05-04 00:02:27.868240 | orchestrator | 00:02:27.865 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-04 00:02:27.868278 | orchestrator | 00:02:27.868 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-04 00:02:27.869352 | orchestrator | 00:02:27.869 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-04 00:02:27.871250 | orchestrator | 00:02:27.871 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-05-04 00:02:27.875435 | orchestrator | 00:02:27.875 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-05-04 00:02:33.247299 | orchestrator | 00:02:33.246 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=fc30e051-df26-454f-b755-ce610d311ae9/10380154-7d57-4db6-80c5-fea690e2f123]
2025-05-04 00:02:33.257405 | orchestrator | 00:02:33.256 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=76ccc974-3d06-4c61-a521-cd6385e0e38e/9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a]
2025-05-04 00:02:33.266977 | orchestrator | 00:02:33.266 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-04 00:02:33.273104 | orchestrator | 00:02:33.272 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=03e42279-f690-40d7-83b6-058e78326d5d/f0e304d0-da68-45fd-ab80-c7aa1a870cfc]
2025-05-04 00:02:33.275925 | orchestrator | 00:02:33.275 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-04 00:02:33.279149 | orchestrator | 00:02:33.278 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=76ccc974-3d06-4c61-a521-cd6385e0e38e/44aea083-53c7-4db3-b476-f0e15c33499e]
2025-05-04 00:02:33.286846 | orchestrator | 00:02:33.286 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=fc30e051-df26-454f-b755-ce610d311ae9/4238a5d3-6f9a-453b-8646-1f6e7fcf7783]
2025-05-04 00:02:33.287259 | orchestrator | 00:02:33.286 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=6a885672-57de-4e3b-b10f-dbd17d895d2a/3434c0cd-230e-4587-95bc-9baf80b8630f]
2025-05-04 00:02:33.288172 | orchestrator | 00:02:33.287 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=ac9a029c-ed7a-4b47-8a49-b5fcd08f5dec/887667df-8a23-4f97-9ff0-05cbc5f29729]
2025-05-04 00:02:33.292934 | orchestrator | 00:02:33.292 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-05-04 00:02:33.295114 | orchestrator | 00:02:33.294 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-05-04 00:02:33.299141 | orchestrator | 00:02:33.299 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-04 00:02:33.303105 | orchestrator | 00:02:33.302 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=ac9a029c-ed7a-4b47-8a49-b5fcd08f5dec/12665b64-aca9-4755-9dee-a26132b82b0a]
2025-05-04 00:02:33.304460 | orchestrator | 00:02:33.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=5291009c-c967-480f-9a3d-0c6e5553c4b7/a335202a-bc46-4a1a-9390-24712f04f8da]
2025-05-04 00:02:33.305571 | orchestrator | 00:02:33.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-04 00:02:33.308504 | orchestrator | 00:02:33.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-05-04 00:02:33.318882 | orchestrator | 00:02:33.318 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-05-04 00:02:33.324891 | orchestrator | 00:02:33.324 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-04 00:02:33.330900 | orchestrator | 00:02:33.330 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=6a885672-57de-4e3b-b10f-dbd17d895d2a/fce9c480-0ce5-4d2c-b3f0-14cdf3862254] 2025-05-04 00:02:38.600510 | orchestrator | 00:02:38.600 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=5291009c-c967-480f-9a3d-0c6e5553c4b7/843cf234-6aef-404a-a841-1f1650f95beb] 2025-05-04 00:02:38.610596 | orchestrator | 00:02:38.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=ac9a029c-ed7a-4b47-8a49-b5fcd08f5dec/228c4a8e-d362-4d42-8ea3-c65a43234221] 2025-05-04 00:02:38.623857 | orchestrator | 00:02:38.623 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=fc30e051-df26-454f-b755-ce610d311ae9/41a828c4-aadc-4592-9baf-1de326a5c86d] 2025-05-04 00:02:38.641989 | orchestrator | 00:02:38.641 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=03e42279-f690-40d7-83b6-058e78326d5d/9737e10e-3051-48df-9cd6-5b074c161c93] 2025-05-04 00:02:38.650575 | orchestrator | 00:02:38.650 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=76ccc974-3d06-4c61-a521-cd6385e0e38e/f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b] 2025-05-04 00:02:38.655945 | orchestrator | 00:02:38.655 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=03e42279-f690-40d7-83b6-058e78326d5d/e986bc1a-3638-41fe-8757-5755b3d430d7] 2025-05-04 00:02:38.677937 | orchestrator | 00:02:38.677 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=6a885672-57de-4e3b-b10f-dbd17d895d2a/5892b7dc-a458-477e-893f-beef3eb00cef] 2025-05-04 00:02:38.679247 | orchestrator | 
00:02:38.679 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=5291009c-c967-480f-9a3d-0c6e5553c4b7/e6952e91-4add-41f4-9682-2820842eaefb] 2025-05-04 00:02:43.327582 | orchestrator | 00:02:43.327 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-04 00:02:53.333099 | orchestrator | 00:02:53.332 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-04 00:02:54.055407 | orchestrator | 00:02:54.055 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=5055997a-15f8-4f27-b5ae-faa2f712f37a] 2025-05-04 00:02:54.069414 | orchestrator | 00:02:54.069 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 2025-05-04 00:02:54.069506 | orchestrator | 00:02:54.069 STDOUT terraform: Outputs: 2025-05-04 00:02:54.069524 | orchestrator | 00:02:54.069 STDOUT terraform: manager_address = 2025-05-04 00:02:54.069561 | orchestrator | 00:02:54.069 STDOUT terraform: private_key = 2025-05-04 00:03:04.270075 | orchestrator | changed 2025-05-04 00:03:04.315464 | 2025-05-04 00:03:04.315576 | TASK [Fetch manager address] 2025-05-04 00:03:04.713113 | orchestrator | ok 2025-05-04 00:03:04.723320 | 2025-05-04 00:03:04.723406 | TASK [Set manager_host address] 2025-05-04 00:03:04.827491 | orchestrator | ok 2025-05-04 00:03:04.839316 | 2025-05-04 00:03:04.839424 | LOOP [Update ansible collections] 2025-05-04 00:03:05.770894 | orchestrator | changed 2025-05-04 00:03:06.620091 | orchestrator | changed 2025-05-04 00:03:06.640124 | 2025-05-04 00:03:06.640256 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-04 00:03:17.160695 | orchestrator | ok 2025-05-04 00:03:17.173619 | 2025-05-04 00:03:17.173751 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-04 00:04:17.215910 | orchestrator | ok 2025-05-04 
00:04:17.227985 | 2025-05-04 00:04:17.228114 | TASK [Fetch manager ssh hostkey] 2025-05-04 00:04:18.775903 | orchestrator | Output suppressed because no_log was given 2025-05-04 00:04:18.796856 | 2025-05-04 00:04:18.797033 | TASK [Get ssh keypair from terraform environment] 2025-05-04 00:04:19.379843 | orchestrator | changed 2025-05-04 00:04:19.400250 | 2025-05-04 00:04:19.400459 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-04 00:04:19.452731 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-04 00:04:19.464119 | 2025-05-04 00:04:19.464240 | TASK [Run manager part 0] 2025-05-04 00:04:20.329750 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-04 00:04:20.373470 | orchestrator | 2025-05-04 00:04:22.161725 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-04 00:04:22.161782 | orchestrator | 2025-05-04 00:04:22.161809 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-04 00:04:22.161828 | orchestrator | ok: [testbed-manager] 2025-05-04 00:04:24.872967 | orchestrator | 2025-05-04 00:04:24.873064 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-04 00:04:24.873080 | orchestrator | 2025-05-04 00:04:24.873087 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:04:24.873101 | orchestrator | ok: [testbed-manager] 2025-05-04 00:04:25.543049 | orchestrator | 2025-05-04 00:04:25.543117 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-04 00:04:25.543133 | orchestrator | ok: [testbed-manager] 2025-05-04 00:04:25.591036 | orchestrator | 2025-05-04 00:04:25.591095 
| orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-04 00:04:25.591112 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.632474 | orchestrator | 2025-05-04 00:04:25.632526 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-04 00:04:25.632541 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.657933 | orchestrator | 2025-05-04 00:04:25.657977 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-04 00:04:25.658003 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.691349 | orchestrator | 2025-05-04 00:04:25.691398 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-04 00:04:25.691412 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.726542 | orchestrator | 2025-05-04 00:04:25.726586 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-04 00:04:25.726599 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.763473 | orchestrator | 2025-05-04 00:04:25.763502 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-04 00:04:25.763515 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:25.800408 | orchestrator | 2025-05-04 00:04:25.800454 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-04 00:04:25.800468 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:04:26.612351 | orchestrator | 2025-05-04 00:04:26.612422 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-04 00:04:26.612445 | orchestrator | changed: [testbed-manager] 2025-05-04 00:07:17.049187 | orchestrator | 2025-05-04 00:07:17.049298 | orchestrator | TASK [Update APT cache and run dist-upgrade] 
*********************************** 2025-05-04 00:07:17.049344 | orchestrator | changed: [testbed-manager] 2025-05-04 00:08:34.692770 | orchestrator | 2025-05-04 00:08:34.692828 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-04 00:08:34.692848 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:00.589805 | orchestrator | 2025-05-04 00:09:00.589920 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-04 00:09:00.589999 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:10.142849 | orchestrator | 2025-05-04 00:09:10.142996 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-04 00:09:10.143034 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:10.182598 | orchestrator | 2025-05-04 00:09:10.182674 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-04 00:09:10.182715 | orchestrator | ok: [testbed-manager] 2025-05-04 00:09:10.944629 | orchestrator | 2025-05-04 00:09:10.944769 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-04 00:09:10.944793 | orchestrator | ok: [testbed-manager] 2025-05-04 00:09:11.687832 | orchestrator | 2025-05-04 00:09:11.687974 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-04 00:09:11.688025 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:18.053025 | orchestrator | 2025-05-04 00:09:18.053133 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-04 00:09:18.053170 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:24.012535 | orchestrator | 2025-05-04 00:09:24.012689 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-04 00:09:24.012749 | orchestrator | changed: 
[testbed-manager] 2025-05-04 00:09:26.574930 | orchestrator | 2025-05-04 00:09:26.575063 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-04 00:09:26.575101 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:28.287257 | orchestrator | 2025-05-04 00:09:28.287899 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-04 00:09:28.287931 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:29.392182 | orchestrator | 2025-05-04 00:09:29.392264 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-04 00:09:29.392297 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-04 00:09:29.435004 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-04 00:09:29.435122 | orchestrator | 2025-05-04 00:09:29.435153 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-04 00:09:29.435195 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-04 00:09:32.638443 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-04 00:09:32.638497 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-04 00:09:32.638508 | orchestrator | deprecation_warnings=False in ansible.cfg. 
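The deprecation warning above names the knob for silencing such warnings. A minimal `ansible.cfg` fragment applying it (the `[defaults]` section is where Ansible reads this setting) would look like:

```ini
[defaults]
; Silence deprecation warnings such as the stdin-object notice above.
deprecation_warnings = False
```

This only hides the warnings; the deprecated behaviour itself is still scheduled for removal in ansible-core 2.19.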
2025-05-04 00:09:32.638524 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-04 00:09:33.202699 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-04 00:09:33.202805 | orchestrator | 2025-05-04 00:09:33.202826 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-04 00:09:33.202857 | orchestrator | changed: [testbed-manager] 2025-05-04 00:09:55.555084 | orchestrator | 2025-05-04 00:09:55.555195 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-04 00:09:55.555231 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-04 00:09:57.836820 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-04 00:09:57.836918 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-04 00:09:57.836937 | orchestrator | 2025-05-04 00:09:57.837019 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-04 00:09:57.837051 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-04 00:09:59.217034 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-04 00:09:59.217142 | orchestrator | 2025-05-04 00:09:59.217163 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-04 00:09:59.217179 | orchestrator | 2025-05-04 00:09:59.217194 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:09:59.217224 | orchestrator | ok: [testbed-manager] 2025-05-04 00:09:59.267573 | orchestrator | 2025-05-04 00:09:59.267658 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-04 00:09:59.267689 | orchestrator | ok: [testbed-manager] 2025-05-04 00:09:59.338226 | 
orchestrator | 2025-05-04 00:09:59.338327 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-04 00:09:59.338360 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:00.081458 | orchestrator | 2025-05-04 00:10:00.082329 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-04 00:10:00.082383 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:00.811660 | orchestrator | 2025-05-04 00:10:00.811771 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-04 00:10:00.811809 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:02.232041 | orchestrator | 2025-05-04 00:10:02.232137 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-04 00:10:02.232173 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-04 00:10:03.587290 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-04 00:10:03.587394 | orchestrator | 2025-05-04 00:10:03.587414 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-04 00:10:03.587444 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:05.343935 | orchestrator | 2025-05-04 00:10:05.344657 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-04 00:10:05.344701 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-04 00:10:05.924084 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-04 00:10:05.924181 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-04 00:10:05.924202 | orchestrator | 2025-05-04 00:10:05.924218 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-04 00:10:05.924248 | orchestrator | changed: [testbed-manager] 
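The locale exports that the `osism.commons.operator` role writes into `.bashrc` above can be sketched as a small idempotent shell step. This is an illustration, not the role's actual implementation; `BASHRC` is a stand-in path defaulting to a demo file:

```shell
#!/usr/bin/env sh
# Sketch: idempotently append the locale exports seen in the
# "Set language variables in .bashrc configuration file" task.
BASHRC="${BASHRC:-./bashrc.demo}"

for line in \
    'export LANGUAGE=C.UTF-8' \
    'export LANG=C.UTF-8' \
    'export LC_ALL=C.UTF-8'
do
    # Append only if the exact line is missing (idempotent, roughly
    # what Ansible's lineinfile module does for the role).
    grep -qxF "$line" "$BASHRC" 2>/dev/null || printf '%s\n' "$line" >> "$BASHRC"
done
```

Running the script twice leaves exactly three lines in the file, mirroring the `changed`/`ok` semantics of the Ansible task.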
2025-05-04 00:10:05.990808 | orchestrator | 2025-05-04 00:10:05.990873 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-04 00:10:05.990899 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:06.858383 | orchestrator | 2025-05-04 00:10:06.858451 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-04 00:10:06.858472 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:10:06.896190 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:06.896251 | orchestrator | 2025-05-04 00:10:06.896261 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-04 00:10:06.896278 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:06.928110 | orchestrator | 2025-05-04 00:10:06.928191 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-04 00:10:06.928219 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:06.967226 | orchestrator | 2025-05-04 00:10:06.967333 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-04 00:10:06.967369 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:07.023215 | orchestrator | 2025-05-04 00:10:07.023312 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-04 00:10:07.023343 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:07.762181 | orchestrator | 2025-05-04 00:10:07.762285 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-04 00:10:07.762321 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:09.160277 | orchestrator | 2025-05-04 00:10:09.160366 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-04 00:10:09.160386 | orchestrator | 2025-05-04 
00:10:09.160401 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:10:09.160429 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:10.110849 | orchestrator | 2025-05-04 00:10:10.110976 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-04 00:10:10.111013 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:10.211701 | orchestrator | 2025-05-04 00:10:10.211808 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:10:10.211826 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-04 00:10:10.211841 | orchestrator | 2025-05-04 00:10:10.267114 | orchestrator | changed 2025-05-04 00:10:10.279134 | 2025-05-04 00:10:10.279243 | TASK [Point out that logging in to the manager is now possible] 2025-05-04 00:10:10.326710 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-04 00:10:10.337840 | 2025-05-04 00:10:10.337962 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-04 00:10:10.383167 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 
2025-05-04 00:10:10.395091 | 2025-05-04 00:10:10.395223 | TASK [Run manager part 1 + 2] 2025-05-04 00:10:11.265573 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-04 00:10:11.320912 | orchestrator | 2025-05-04 00:10:13.795066 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-04 00:10:13.795131 | orchestrator | 2025-05-04 00:10:13.795153 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:10:13.795169 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:13.838127 | orchestrator | 2025-05-04 00:10:13.838229 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-04 00:10:13.838268 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:13.883036 | orchestrator | 2025-05-04 00:10:13.883103 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-04 00:10:13.883124 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:13.927719 | orchestrator | 2025-05-04 00:10:13.927787 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-04 00:10:13.927807 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:14.003595 | orchestrator | 2025-05-04 00:10:14.003663 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-04 00:10:14.003685 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:14.074553 | orchestrator | 2025-05-04 00:10:14.074614 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-04 00:10:14.074633 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:14.119479 | orchestrator | 2025-05-04 00:10:14.119538 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-04 00:10:14.119555 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-04 00:10:14.854929 | orchestrator | 2025-05-04 00:10:14.855027 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-04 00:10:14.855049 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:14.903111 | orchestrator | 2025-05-04 00:10:14.903176 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-04 00:10:14.903196 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:16.277902 | orchestrator | 2025-05-04 00:10:16.277992 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-04 00:10:16.278047 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:16.851640 | orchestrator | 2025-05-04 00:10:16.851698 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-04 00:10:16.851716 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:17.988217 | orchestrator | 2025-05-04 00:10:17.988289 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-04 00:10:17.988310 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:29.205291 | orchestrator | 2025-05-04 00:10:29.205371 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-04 00:10:29.205401 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:29.841819 | orchestrator | 2025-05-04 00:10:29.841863 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-04 00:10:29.841881 | orchestrator | ok: [testbed-manager] 2025-05-04 00:10:29.895733 | orchestrator | 2025-05-04 00:10:29.895779 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-05-04 00:10:29.895796 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:30.823511 | orchestrator | 2025-05-04 00:10:30.823625 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-04 00:10:30.823662 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:31.811430 | orchestrator | 2025-05-04 00:10:31.811542 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-04 00:10:31.811581 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:32.395248 | orchestrator | 2025-05-04 00:10:32.395893 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-04 00:10:32.395966 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:32.433498 | orchestrator | 2025-05-04 00:10:32.433609 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-04 00:10:32.433642 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-04 00:10:34.788914 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-04 00:10:34.789006 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-04 00:10:34.789018 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-04 00:10:34.789034 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:43.611074 | orchestrator | 2025-05-04 00:10:43.611158 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-04 00:10:43.611185 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-04 00:10:44.674999 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-04 00:10:44.675056 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-04 00:10:44.675066 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-04 00:10:44.675076 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-04 00:10:44.675084 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-04 00:10:44.675093 | orchestrator | 2025-05-04 00:10:44.675101 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-04 00:10:44.675127 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:44.719840 | orchestrator | 2025-05-04 00:10:44.719901 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-04 00:10:44.719921 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:10:47.650149 | orchestrator | 2025-05-04 00:10:47.650256 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-04 00:10:47.650292 | orchestrator | changed: [testbed-manager] 2025-05-04 00:10:47.691181 | orchestrator | 2025-05-04 00:10:47.691286 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-04 00:10:47.691320 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:12:26.774976 | orchestrator | 2025-05-04 00:12:26.775027 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-04 00:12:26.775041 | orchestrator | changed: [testbed-manager] 2025-05-04 
00:12:27.844541 | orchestrator | 2025-05-04 00:12:27.844589 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-04 00:12:27.844605 | orchestrator | ok: [testbed-manager] 2025-05-04 00:12:27.946085 | orchestrator | 2025-05-04 00:12:27.946302 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:12:27.946315 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-04 00:12:27.946322 | orchestrator | 2025-05-04 00:12:28.042795 | orchestrator | changed 2025-05-04 00:12:28.062153 | 2025-05-04 00:12:28.062293 | TASK [Reboot manager] 2025-05-04 00:12:29.607367 | orchestrator | changed 2025-05-04 00:12:29.627561 | 2025-05-04 00:12:29.627732 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-04 00:12:43.759151 | orchestrator | ok 2025-05-04 00:12:43.771022 | 2025-05-04 00:12:43.771153 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-04 00:13:43.824070 | orchestrator | ok 2025-05-04 00:13:43.835284 | 2025-05-04 00:13:43.835465 | TASK [Deploy manager + bootstrap nodes] 2025-05-04 00:13:46.069235 | orchestrator | 2025-05-04 00:13:46.073292 | orchestrator | # DEPLOY MANAGER 2025-05-04 00:13:46.073342 | orchestrator | 2025-05-04 00:13:46.073377 | orchestrator | + set -e 2025-05-04 00:13:46.073415 | orchestrator | + echo 2025-05-04 00:13:46.073434 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-04 00:13:46.073451 | orchestrator | + echo 2025-05-04 00:13:46.073472 | orchestrator | + cat /opt/manager-vars.sh 2025-05-04 00:13:46.073509 | orchestrator | export NUMBER_OF_NODES=6 2025-05-04 00:13:46.073578 | orchestrator | 2025-05-04 00:13:46.073607 | orchestrator | export CEPH_VERSION=reef 2025-05-04 00:13:46.073633 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-04 00:13:46.073661 | orchestrator | export MANAGER_VERSION=8.1.0 
2025-05-04 00:13:46.073686 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-04 00:13:46.073702 | orchestrator | 2025-05-04 00:13:46.073717 | orchestrator | export ARA=false 2025-05-04 00:13:46.073732 | orchestrator | export TEMPEST=false 2025-05-04 00:13:46.073746 | orchestrator | export IS_ZUUL=true 2025-05-04 00:13:46.073760 | orchestrator | 2025-05-04 00:13:46.073774 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-05-04 00:13:46.073788 | orchestrator | export EXTERNAL_API=false 2025-05-04 00:13:46.073802 | orchestrator | 2025-05-04 00:13:46.073816 | orchestrator | export IMAGE_USER=ubuntu 2025-05-04 00:13:46.073830 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-04 00:13:46.073846 | orchestrator | 2025-05-04 00:13:46.073859 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-04 00:13:46.073913 | orchestrator | 2025-05-04 00:13:46.074501 | orchestrator | + echo 2025-05-04 00:13:46.074536 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-04 00:13:46.074556 | orchestrator | ++ export INTERACTIVE=false 2025-05-04 00:13:46.074686 | orchestrator | ++ INTERACTIVE=false 2025-05-04 00:13:46.074712 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-04 00:13:46.074736 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-04 00:13:46.074756 | orchestrator | + source /opt/manager-vars.sh 2025-05-04 00:13:46.074931 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-04 00:13:46.074957 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-04 00:13:46.074971 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-04 00:13:46.074985 | orchestrator | ++ CEPH_VERSION=reef 2025-05-04 00:13:46.075005 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-04 00:13:46.075071 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-04 00:13:46.075096 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-04 00:13:46.075110 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-04 00:13:46.075124 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2025-05-04 00:13:46.075138 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-04 00:13:46.075152 | orchestrator | ++ export ARA=false 2025-05-04 00:13:46.075166 | orchestrator | ++ ARA=false 2025-05-04 00:13:46.075180 | orchestrator | ++ export TEMPEST=false 2025-05-04 00:13:46.075193 | orchestrator | ++ TEMPEST=false 2025-05-04 00:13:46.075207 | orchestrator | ++ export IS_ZUUL=true 2025-05-04 00:13:46.075221 | orchestrator | ++ IS_ZUUL=true 2025-05-04 00:13:46.075235 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-05-04 00:13:46.075249 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-05-04 00:13:46.075270 | orchestrator | ++ export EXTERNAL_API=false 2025-05-04 00:13:46.075284 | orchestrator | ++ EXTERNAL_API=false 2025-05-04 00:13:46.075298 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-04 00:13:46.075312 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-04 00:13:46.075326 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-04 00:13:46.075340 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-04 00:13:46.075357 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-04 00:13:46.075371 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-04 00:13:46.075389 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-04 00:13:46.129627 | orchestrator | + docker version 2025-05-04 00:13:46.387797 | orchestrator | Client: Docker Engine - Community 2025-05-04 00:13:46.388083 | orchestrator | Version: 26.1.4 2025-05-04 00:13:46.388126 | orchestrator | API version: 1.45 2025-05-04 00:13:46.388141 | orchestrator | Go version: go1.21.11 2025-05-04 00:13:46.388166 | orchestrator | Git commit: 5650f9b 2025-05-04 00:13:46.388182 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-04 00:13:46.388197 | orchestrator | OS/Arch: linux/amd64 2025-05-04 00:13:46.388211 | orchestrator | Context: default 2025-05-04 00:13:46.388226 | orchestrator | 2025-05-04 
00:13:46.388246 | orchestrator | Server: Docker Engine - Community 2025-05-04 00:13:46.388306 | orchestrator | Engine: 2025-05-04 00:13:46.388332 | orchestrator | Version: 26.1.4 2025-05-04 00:13:46.388352 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-04 00:13:46.388366 | orchestrator | Go version: go1.21.11 2025-05-04 00:13:46.388382 | orchestrator | Git commit: de5c9cf 2025-05-04 00:13:46.388434 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-04 00:13:46.388448 | orchestrator | OS/Arch: linux/amd64 2025-05-04 00:13:46.388463 | orchestrator | Experimental: false 2025-05-04 00:13:46.388476 | orchestrator | containerd: 2025-05-04 00:13:46.388490 | orchestrator | Version: 1.7.27 2025-05-04 00:13:46.388504 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-04 00:13:46.388518 | orchestrator | runc: 2025-05-04 00:13:46.388532 | orchestrator | Version: 1.2.5 2025-05-04 00:13:46.388561 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-04 00:13:46.388575 | orchestrator | docker-init: 2025-05-04 00:13:46.388617 | orchestrator | Version: 0.19.0 2025-05-04 00:13:46.391246 | orchestrator | GitCommit: de40ad0 2025-05-04 00:13:46.391293 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-04 00:13:46.400595 | orchestrator | + set -e 2025-05-04 00:13:46.400641 | orchestrator | + source /opt/manager-vars.sh 2025-05-04 00:13:46.400686 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-04 00:13:46.400701 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-04 00:13:46.400716 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-04 00:13:46.400730 | orchestrator | ++ CEPH_VERSION=reef 2025-05-04 00:13:46.400745 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-04 00:13:46.400759 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-04 00:13:46.400773 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-04 00:13:46.400787 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-04 
00:13:46.400802 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-04 00:13:46.400816 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-04 00:13:46.400829 | orchestrator | ++ export ARA=false 2025-05-04 00:13:46.400843 | orchestrator | ++ ARA=false 2025-05-04 00:13:46.400857 | orchestrator | ++ export TEMPEST=false 2025-05-04 00:13:46.400871 | orchestrator | ++ TEMPEST=false 2025-05-04 00:13:46.400932 | orchestrator | ++ export IS_ZUUL=true 2025-05-04 00:13:46.400947 | orchestrator | ++ IS_ZUUL=true 2025-05-04 00:13:46.400967 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-05-04 00:13:46.401207 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-05-04 00:13:46.401227 | orchestrator | ++ export EXTERNAL_API=false 2025-05-04 00:13:46.401241 | orchestrator | ++ EXTERNAL_API=false 2025-05-04 00:13:46.401255 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-04 00:13:46.401269 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-04 00:13:46.401284 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-04 00:13:46.401298 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-04 00:13:46.401312 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-04 00:13:46.401326 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-04 00:13:46.401340 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-04 00:13:46.401361 | orchestrator | ++ export INTERACTIVE=false 2025-05-04 00:13:46.401375 | orchestrator | ++ INTERACTIVE=false 2025-05-04 00:13:46.401389 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-04 00:13:46.401403 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-04 00:13:46.401429 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-04 00:13:46.409420 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-04 00:13:46.409510 | orchestrator | + set -e 2025-05-04 00:13:46.417466 | orchestrator | + VERSION=8.1.0 2025-05-04 00:13:46.417505 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-04 00:13:46.417546 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-04 00:13:46.422377 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-04 00:13:46.422422 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-04 00:13:46.426174 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-04 00:13:46.435810 | orchestrator | /opt/configuration ~ 2025-05-04 00:13:46.438631 | orchestrator | + set -e 2025-05-04 00:13:46.438661 | orchestrator | + pushd /opt/configuration 2025-05-04 00:13:46.438676 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-04 00:13:46.438697 | orchestrator | + source /opt/venv/bin/activate 2025-05-04 00:13:46.439637 | orchestrator | ++ deactivate nondestructive 2025-05-04 00:13:46.439833 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:46.440057 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:46.440085 | orchestrator | ++ hash -r 2025-05-04 00:13:46.440099 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:46.440113 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-04 00:13:46.440127 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-04 00:13:46.440156 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-04 00:13:46.440199 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-04 00:13:46.440214 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-04 00:13:46.440234 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-04 00:13:46.440398 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-04 00:13:46.440416 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:13:46.440431 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:13:46.440445 | orchestrator | ++ export PATH 2025-05-04 00:13:46.440460 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:46.440474 | orchestrator | ++ '[' -z '' ']' 2025-05-04 00:13:46.440488 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-04 00:13:46.440502 | orchestrator | ++ PS1='(venv) ' 2025-05-04 00:13:46.440516 | orchestrator | ++ export PS1 2025-05-04 00:13:46.440530 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-04 00:13:46.440551 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-04 00:13:46.440576 | orchestrator | ++ hash -r 2025-05-04 00:13:46.440611 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-04 00:13:47.433169 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-04 00:13:47.433792 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-04 00:13:47.435152 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-04 00:13:47.436676 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-04 00:13:47.437679 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-05-04 00:13:47.447796 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-05-04 00:13:47.449268 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-04 00:13:47.450321 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-04 00:13:47.452060 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-04 00:13:47.487633 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-04 00:13:47.489057 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-04 00:13:47.490523 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-04 00:13:47.492098 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-04 00:13:47.496330 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-04 00:13:47.702293 | orchestrator | ++ which gilt 2025-05-04 00:13:47.704521 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-04 00:13:47.915124 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-04 00:13:47.915260 | orchestrator | osism.cfg-generics: 2025-05-04 00:13:49.450776 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-04 00:13:49.450951 | orchestrator | - copied (main) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-04 00:13:49.451234 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-04 00:13:49.451256 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-04 00:13:49.451277 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-04 00:13:50.398509 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-04 00:13:50.410083 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-04 00:13:50.702396 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-04 00:13:50.757460 | orchestrator | ~ 2025-05-04 00:13:50.759370 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-04 00:13:50.759433 | orchestrator | + deactivate 2025-05-04 00:13:50.759474 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-04 00:13:50.759495 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:13:50.759519 | orchestrator | + export PATH 2025-05-04 00:13:50.759545 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-04 00:13:50.759566 | orchestrator | + '[' -n '' ']' 2025-05-04 00:13:50.759581 | orchestrator | + hash -r 2025-05-04 00:13:50.759595 | orchestrator | + '[' -n '' ']' 2025-05-04 00:13:50.759608 | orchestrator | + unset VIRTUAL_ENV 2025-05-04 00:13:50.759626 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-04 00:13:50.759651 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-04 00:13:50.759674 | orchestrator | + unset -f deactivate 2025-05-04 00:13:50.759689 | orchestrator | + popd 2025-05-04 00:13:50.759714 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-04 00:13:50.814276 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-04 00:13:50.814373 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-04 00:13:50.814406 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-04 00:13:50.857183 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-04 00:13:50.857278 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-04 00:13:50.857312 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-04 00:13:51.975525 | orchestrator | + source /opt/venv/bin/activate 2025-05-04 00:13:51.975668 | orchestrator | ++ deactivate nondestructive 2025-05-04 00:13:51.975687 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:51.975702 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:51.975717 | orchestrator | ++ hash -r 2025-05-04 00:13:51.975731 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:51.975745 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-04 00:13:51.975759 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-04 00:13:51.975788 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-04 00:13:51.975804 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-04 00:13:51.975818 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-04 00:13:51.975833 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-04 00:13:51.975847 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-04 00:13:51.975862 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:13:51.975921 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:13:51.975936 | orchestrator | ++ export PATH 2025-05-04 00:13:51.975950 | orchestrator | ++ '[' -n '' ']' 2025-05-04 00:13:51.975964 | orchestrator | ++ '[' -z '' ']' 2025-05-04 00:13:51.975979 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-04 00:13:51.975993 | orchestrator | ++ PS1='(venv) ' 2025-05-04 00:13:51.976007 | orchestrator | ++ export PS1 2025-05-04 00:13:51.976021 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-04 00:13:51.976035 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-04 00:13:51.976053 | orchestrator | ++ hash -r 2025-05-04 00:13:51.976067 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-04 00:13:51.976102 | orchestrator | 2025-05-04 00:13:52.523980 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-04 00:13:52.524119 | orchestrator | 2025-05-04 00:13:52.524140 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-04 00:13:52.524175 | orchestrator | ok: [testbed-manager] 2025-05-04 00:13:53.487968 | orchestrator | 2025-05-04 00:13:53.488113 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-05-04 00:13:53.488153 | orchestrator | changed: [testbed-manager] 2025-05-04 00:13:55.812685 | orchestrator | 2025-05-04 00:13:55.812856 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-04 00:13:55.812916 | orchestrator | 2025-05-04 00:13:55.812932 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:13:55.812964 | orchestrator | ok: [testbed-manager] 2025-05-04 00:14:00.300485 | orchestrator | 2025-05-04 00:14:00.300644 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-04 00:14:00.300737 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-04 00:15:16.347755 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-05-04 00:15:16.348060 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-04 00:15:16.348090 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-04 00:15:16.348106 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-04 00:15:16.348122 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-05-04 00:15:16.348137 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-04 00:15:16.348151 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-04 00:15:16.348165 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-04 00:15:16.348188 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-05-04 00:15:16.348204 | orchestrator | changed: [testbed-manager] => 
(item=index.docker.io/library/traefik:v3.2.1) 2025-05-04 00:15:16.348218 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-05-04 00:15:16.348232 | orchestrator | 2025-05-04 00:15:16.348247 | orchestrator | TASK [Check status] ************************************************************ 2025-05-04 00:15:16.348279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-04 00:15:16.399709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-04 00:15:16.399810 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-04 00:15:16.399826 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-04 00:15:16.399836 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-05-04 00:15:16.399871 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j565696352805.1590', 'results_file': '/home/dragon/.ansible_async/j565696352805.1590', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399894 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j416927303713.1615', 'results_file': '/home/dragon/.ansible_async/j416927303713.1615', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-04 00:15:16.399914 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j166209072476.1640', 'results_file': '/home/dragon/.ansible_async/j166209072476.1640', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399927 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j595158689602.1672', 'results_file': '/home/dragon/.ansible_async/j595158689602.1672', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399942 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j822374265325.1704', 'results_file': '/home/dragon/.ansible_async/j822374265325.1704', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399953 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j864763898774.1736', 'results_file': '/home/dragon/.ansible_async/j864763898774.1736', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.399964 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-04 00:15:16.399973 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j844781781978.1768', 'results_file': '/home/dragon/.ansible_async/j844781781978.1768', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400008 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j558198911905.1800', 'results_file': '/home/dragon/.ansible_async/j558198911905.1800', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400019 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j998048015819.1832', 'results_file': '/home/dragon/.ansible_async/j998048015819.1832', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400029 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j858389393123.1864', 'results_file': '/home/dragon/.ansible_async/j858389393123.1864', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400039 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j167269291495.1897', 'results_file': '/home/dragon/.ansible_async/j167269291495.1897', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400049 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j637519678524.1931', 'results_file': '/home/dragon/.ansible_async/j637519678524.1931', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-04 00:15:16.400058 | 
orchestrator | 2025-05-04 00:15:16.400069 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-04 00:15:16.400093 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:16.858723 | orchestrator | 2025-05-04 00:15:16.858872 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-04 00:15:16.858911 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:17.187472 | orchestrator | 2025-05-04 00:15:17.187596 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-04 00:15:17.187629 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:17.518379 | orchestrator | 2025-05-04 00:15:17.518507 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-04 00:15:17.518541 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:17.574740 | orchestrator | 2025-05-04 00:15:17.574833 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-04 00:15:17.574916 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:15:17.892943 | orchestrator | 2025-05-04 00:15:17.893071 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-04 00:15:17.893106 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:17.997449 | orchestrator | 2025-05-04 00:15:17.997547 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-04 00:15:17.997578 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:15:19.790717 | orchestrator | 2025-05-04 00:15:19.790906 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-04 00:15:19.790929 | orchestrator | 2025-05-04 00:15:19.790945 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 
00:15:19.790976 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:19.875034 | orchestrator | 2025-05-04 00:15:19.875143 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-04 00:15:19.875173 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-04 00:15:19.941176 | orchestrator | 2025-05-04 00:15:19.941299 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-04 00:15:19.941351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-04 00:15:21.009050 | orchestrator | 2025-05-04 00:15:21.009226 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-04 00:15:21.009266 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-04 00:15:22.791520 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-04 00:15:22.791669 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-04 00:15:22.791690 | orchestrator | 2025-05-04 00:15:22.791706 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-04 00:15:22.791739 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-04 00:15:23.420522 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-04 00:15:23.420673 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-04 00:15:23.420694 | orchestrator | 2025-05-04 00:15:23.420710 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-04 00:15:23.420742 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:24.044632 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:24.044766 | orchestrator | 2025-05-04 00:15:24.044787 | 
orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-04 00:15:24.044823 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:24.106439 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:24.106574 | orchestrator | 2025-05-04 00:15:24.106595 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-04 00:15:24.106627 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:15:24.473901 | orchestrator | 2025-05-04 00:15:24.474095 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-04 00:15:24.474137 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:24.530451 | orchestrator | 2025-05-04 00:15:24.530601 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-04 00:15:24.530638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-04 00:15:25.623349 | orchestrator | 2025-05-04 00:15:25.623481 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-04 00:15:25.623520 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:26.417282 | orchestrator | 2025-05-04 00:15:26.417422 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-04 00:15:26.417460 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:29.563141 | orchestrator | 2025-05-04 00:15:29.563301 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-04 00:15:29.563341 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:29.683986 | orchestrator | 2025-05-04 00:15:29.684142 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-04 00:15:29.684186 | 
orchestrator | included: osism.services.netbox for testbed-manager 2025-05-04 00:15:29.751576 | orchestrator | 2025-05-04 00:15:29.751699 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-04 00:15:29.751732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-04 00:15:32.365958 | orchestrator | 2025-05-04 00:15:32.366141 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-04 00:15:32.366182 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:32.475679 | orchestrator | 2025-05-04 00:15:32.475789 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-04 00:15:32.475823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-04 00:15:33.583019 | orchestrator | 2025-05-04 00:15:33.583155 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-04 00:15:33.583194 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-04 00:15:33.652737 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-04 00:15:33.652892 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-04 00:15:33.652912 | orchestrator | 2025-05-04 00:15:33.652928 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-04 00:15:33.652996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-04 00:15:34.301523 | orchestrator | 2025-05-04 00:15:34.301656 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-04 
00:15:34.301693 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-04 00:15:34.942780 | orchestrator | 2025-05-04 00:15:34.942969 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-04 00:15:34.943018 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:35.575255 | orchestrator | 2025-05-04 00:15:35.575390 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-04 00:15:35.575428 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:35.966883 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:35.967016 | orchestrator | 2025-05-04 00:15:35.967037 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-04 00:15:35.967068 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:36.311704 | orchestrator | 2025-05-04 00:15:36.311874 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-04 00:15:36.311912 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:36.353053 | orchestrator | 2025-05-04 00:15:36.353141 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-04 00:15:36.353171 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:15:36.975416 | orchestrator | 2025-05-04 00:15:36.975554 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-04 00:15:36.975591 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:37.051336 | orchestrator | 2025-05-04 00:15:37.051460 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-04 00:15:37.051495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-04 00:15:37.790642 | orchestrator | 
2025-05-04 00:15:37.790776 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-04 00:15:37.790813 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-04 00:15:38.445518 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-04 00:15:38.445661 | orchestrator | 2025-05-04 00:15:38.445686 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-04 00:15:38.445718 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-04 00:15:39.087285 | orchestrator | 2025-05-04 00:15:39.087423 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-04 00:15:39.087461 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:39.137091 | orchestrator | 2025-05-04 00:15:39.137210 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-04 00:15:39.137243 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:15:39.775703 | orchestrator | 2025-05-04 00:15:39.775896 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-04 00:15:39.775950 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:41.587221 | orchestrator | 2025-05-04 00:15:41.587371 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-04 00:15:41.587410 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:47.407014 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:47.407162 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:15:47.407184 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:47.407202 | orchestrator | 2025-05-04 00:15:47.407217 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] 
****************** 2025-05-04 00:15:47.407251 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-04 00:15:47.988446 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-04 00:15:47.988583 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-04 00:15:47.988602 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-04 00:15:47.988618 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-04 00:15:47.988634 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-04 00:15:47.988684 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-04 00:15:47.988699 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-04 00:15:47.988714 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-04 00:15:47.988729 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-04 00:15:47.988743 | orchestrator | 2025-05-04 00:15:47.988759 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-04 00:15:47.988791 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-04 00:15:48.060310 | orchestrator | 2025-05-04 00:15:48.060411 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-04 00:15:48.060444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-04 00:15:48.744221 | orchestrator | 2025-05-04 00:15:48.744365 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-04 00:15:48.744403 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:49.335047 | orchestrator | 2025-05-04 00:15:49.335191 | orchestrator | TASK 
[osism.services.netbox : Create traefik external network] ***************** 2025-05-04 00:15:49.335225 | orchestrator | ok: [testbed-manager] 2025-05-04 00:15:50.037529 | orchestrator | 2025-05-04 00:15:50.037659 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-04 00:15:50.037696 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:55.795196 | orchestrator | 2025-05-04 00:15:55.795341 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-04 00:15:55.795381 | orchestrator | changed: [testbed-manager] 2025-05-04 00:15:56.742314 | orchestrator | 2025-05-04 00:15:56.742454 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-04 00:15:56.742492 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:18.983095 | orchestrator | 2025-05-04 00:16:18.983211 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-04 00:16:18.983234 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
2025-05-04 00:16:19.043956 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:19.044070 | orchestrator | 2025-05-04 00:16:19.044082 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-04 00:16:19.044109 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:19.088646 | orchestrator | 2025-05-04 00:16:19.088753 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-04 00:16:19.088761 | orchestrator | 2025-05-04 00:16:19.088767 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-04 00:16:19.088785 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:19.174084 | orchestrator | 2025-05-04 00:16:19.174153 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-04 00:16:19.174171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-04 00:16:19.979051 | orchestrator | 2025-05-04 00:16:19.979194 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-04 00:16:19.979234 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:20.052054 | orchestrator | 2025-05-04 00:16:20.052179 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-04 00:16:20.052216 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:20.114282 | orchestrator | 2025-05-04 00:16:20.114355 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-04 00:16:20.114388 | orchestrator | ok: [testbed-manager] => { 2025-05-04 00:16:20.779080 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-04 00:16:20.779227 | orchestrator | } 2025-05-04 00:16:20.779242 | orchestrator | 2025-05-04 
00:16:20.779255 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-04 00:16:20.779281 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:21.678480 | orchestrator | 2025-05-04 00:16:21.678626 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-04 00:16:21.678698 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:21.746252 | orchestrator | 2025-05-04 00:16:21.746335 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-04 00:16:21.746368 | orchestrator | ok: [testbed-manager] 2025-05-04 00:16:21.799145 | orchestrator | 2025-05-04 00:16:21.799187 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-04 00:16:21.799225 | orchestrator | ok: [testbed-manager] => { 2025-05-04 00:16:21.865581 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-04 00:16:21.865656 | orchestrator | } 2025-05-04 00:16:21.865671 | orchestrator | 2025-05-04 00:16:21.865686 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-04 00:16:21.865712 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:21.927187 | orchestrator | 2025-05-04 00:16:21.927215 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-04 00:16:21.927235 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:21.976925 | orchestrator | 2025-05-04 00:16:21.976953 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-04 00:16:21.976974 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:22.033698 | orchestrator | 2025-05-04 00:16:22.033793 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-04 00:16:22.033869 | orchestrator | skipping: 
[testbed-manager] 2025-05-04 00:16:22.085468 | orchestrator | 2025-05-04 00:16:22.085570 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-04 00:16:22.085601 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:22.129631 | orchestrator | 2025-05-04 00:16:22.129739 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-04 00:16:22.129772 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:16:23.538969 | orchestrator | 2025-05-04 00:16:23.539116 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-04 00:16:23.539149 | orchestrator | changed: [testbed-manager] 2025-05-04 00:16:23.619511 | orchestrator | 2025-05-04 00:16:23.619649 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-04 00:16:23.619686 | orchestrator | ok: [testbed-manager] 2025-05-04 00:17:23.677854 | orchestrator | 2025-05-04 00:17:23.678140 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-04 00:17:23.678185 | orchestrator | Pausing for 60 seconds 2025-05-04 00:17:23.725215 | orchestrator | changed: [testbed-manager] 2025-05-04 00:17:23.725340 | orchestrator | 2025-05-04 00:17:23.725357 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-04 00:17:23.725395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-04 00:21:03.624748 | orchestrator | 2025-05-04 00:21:03.624913 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-04 00:21:03.624956 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 
2025-05-04 00:21:05.785432 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-04 00:21:05.785611 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-04 00:21:05.785633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-04 00:21:05.785648 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-04 00:21:05.785664 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-04 00:21:05.785678 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-04 00:21:05.785692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-04 00:21:05.785707 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-04 00:21:05.785721 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-04 00:21:05.785769 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-04 00:21:05.785784 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-04 00:21:05.785798 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-04 00:21:05.785813 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 
2025-05-04 00:21:05.785827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-04 00:21:05.785841 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-04 00:21:05.785856 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-04 00:21:05.785870 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-04 00:21:05.785888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-04 00:21:05.785920 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-04 00:21:05.785937 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
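The "Check that all containers are in a good state" handler above re-runs its check with a retry budget of 60, printing a "FAILED - RETRYING … (N retries left)" line per failed attempt. A minimal shell sketch of such a bounded retry loop follows; the check command, message wording, and `RETRY_DELAY` interval are assumptions for illustration, not the role's actual implementation.

```shell
#!/bin/sh
# Hedged sketch of a bounded retry loop: re-run a check until it succeeds
# or the retry budget is exhausted, logging each failed attempt. The delay
# between attempts (RETRY_DELAY, default 5s) is an assumption.
retry() {
    retries="$1"; shift
    while ! "$@"; do
        retries=$((retries - 1))
        if [ "$retries" -le 0 ]; then
            echo "FAILED - no retries left" >&2
            return 1
        fi
        echo "FAILED - RETRYING ($retries retries left)." >&2
        sleep "${RETRY_DELAY:-5}"
    done
}
```

Usage would resemble `retry 60 some_health_check`, where `some_health_check` is any command that exits non-zero until the containers settle.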
2025-05-04 00:21:05.785952 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:05.785968 | orchestrator | 2025-05-04 00:21:05.785984 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-04 00:21:05.785999 | orchestrator | 2025-05-04 00:21:05.786014 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:21:05.786115 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:05.895116 | orchestrator | 2025-05-04 00:21:05.895318 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-04 00:21:05.895359 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-04 00:21:05.966114 | orchestrator | 2025-05-04 00:21:05.966284 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-04 00:21:05.966322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-04 00:21:07.577607 | orchestrator | 2025-05-04 00:21:07.577720 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-04 00:21:07.577754 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:07.626344 | orchestrator | 2025-05-04 00:21:07.626452 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-04 00:21:07.626485 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:07.707864 | orchestrator | 2025-05-04 00:21:07.707960 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-04 00:21:07.707993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-04 00:21:10.305656 | orchestrator | 2025-05-04 00:21:10.305780 | orchestrator | TASK 
[osism.services.manager : Create required directories] ******************** 2025-05-04 00:21:10.305818 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-04 00:21:10.876490 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-04 00:21:10.876605 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-04 00:21:10.876626 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-04 00:21:10.876641 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-04 00:21:10.876657 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-04 00:21:10.876671 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-04 00:21:10.876686 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-04 00:21:10.876700 | orchestrator | 2025-05-04 00:21:10.876715 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-04 00:21:10.876745 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:10.951511 | orchestrator | 2025-05-04 00:21:10.951616 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-04 00:21:10.951647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-04 00:21:12.152782 | orchestrator | 2025-05-04 00:21:12.152919 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-04 00:21:12.152956 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-04 00:21:12.764547 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-04 00:21:12.764684 | orchestrator | 2025-05-04 00:21:12.764706 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-04 00:21:12.764738 | orchestrator | 
changed: [testbed-manager] 2025-05-04 00:21:12.818649 | orchestrator | 2025-05-04 00:21:12.818735 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-04 00:21:12.818767 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:21:12.893815 | orchestrator | 2025-05-04 00:21:12.893919 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-04 00:21:12.893952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-04 00:21:14.244620 | orchestrator | 2025-05-04 00:21:14.244754 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-04 00:21:14.244790 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:21:14.871887 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:21:14.872015 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:14.872035 | orchestrator | 2025-05-04 00:21:14.872052 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-04 00:21:14.872081 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:14.973635 | orchestrator | 2025-05-04 00:21:14.973759 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-04 00:21:14.973793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-04 00:21:15.612017 | orchestrator | 2025-05-04 00:21:15.612208 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-04 00:21:15.612282 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-04 00:21:16.260346 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:16.260481 | orchestrator | 2025-05-04 
00:21:16.260502 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-04 00:21:16.260535 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:16.363576 | orchestrator | 2025-05-04 00:21:16.363709 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-04 00:21:16.363747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-04 00:21:16.979606 | orchestrator | 2025-05-04 00:21:16.979738 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-04 00:21:16.979791 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:17.397319 | orchestrator | 2025-05-04 00:21:17.397487 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-04 00:21:17.397541 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:18.594825 | orchestrator | 2025-05-04 00:21:18.594980 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-04 00:21:18.595019 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-04 00:21:19.337698 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-04 00:21:19.337831 | orchestrator | 2025-05-04 00:21:19.337853 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-04 00:21:19.337885 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:19.747009 | orchestrator | 2025-05-04 00:21:19.747141 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-04 00:21:19.747175 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:20.107584 | orchestrator | 2025-05-04 00:21:20.107713 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] 
************** 2025-05-04 00:21:20.107789 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:20.151901 | orchestrator | 2025-05-04 00:21:20.152053 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-04 00:21:20.152090 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:21:20.226573 | orchestrator | 2025-05-04 00:21:20.226696 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-04 00:21:20.226732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-04 00:21:20.269465 | orchestrator | 2025-05-04 00:21:20.269578 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-04 00:21:20.269611 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:22.249212 | orchestrator | 2025-05-04 00:21:22.249402 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-04 00:21:22.249440 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-04 00:21:22.973487 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-04 00:21:22.973624 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-04 00:21:22.973645 | orchestrator | 2025-05-04 00:21:22.973662 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-04 00:21:22.973693 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:23.685444 | orchestrator | 2025-05-04 00:21:23.685579 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-04 00:21:23.685616 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:24.400506 | orchestrator | 2025-05-04 00:21:24.400646 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] 
*********************** 2025-05-04 00:21:24.400684 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:24.478819 | orchestrator | 2025-05-04 00:21:24.478946 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-04 00:21:24.478981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-04 00:21:24.529883 | orchestrator | 2025-05-04 00:21:24.530132 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-04 00:21:24.530173 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:25.280439 | orchestrator | 2025-05-04 00:21:25.280579 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-04 00:21:25.280616 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-04 00:21:25.365320 | orchestrator | 2025-05-04 00:21:25.365448 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-04 00:21:25.365480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-04 00:21:26.094589 | orchestrator | 2025-05-04 00:21:26.094754 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-04 00:21:26.094793 | orchestrator | changed: [testbed-manager] 2025-05-04 00:21:26.727983 | orchestrator | 2025-05-04 00:21:26.728118 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-04 00:21:26.728154 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:26.787602 | orchestrator | 2025-05-04 00:21:26.787712 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-04 00:21:26.787747 | orchestrator | skipping: [testbed-manager] 
2025-05-04 00:21:26.845956 | orchestrator | 2025-05-04 00:21:26.846127 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-04 00:21:26.846163 | orchestrator | ok: [testbed-manager] 2025-05-04 00:21:27.674504 | orchestrator | 2025-05-04 00:21:27.674653 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-04 00:21:27.674691 | orchestrator | changed: [testbed-manager] 2025-05-04 00:22:10.315912 | orchestrator | 2025-05-04 00:22:10.316071 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-04 00:22:10.316112 | orchestrator | changed: [testbed-manager] 2025-05-04 00:22:10.980084 | orchestrator | 2025-05-04 00:22:10.980215 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-04 00:22:10.980288 | orchestrator | ok: [testbed-manager] 2025-05-04 00:22:13.522681 | orchestrator | 2025-05-04 00:22:13.522817 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-04 00:22:13.522853 | orchestrator | changed: [testbed-manager] 2025-05-04 00:22:13.578329 | orchestrator | 2025-05-04 00:22:13.578457 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-04 00:22:13.578489 | orchestrator | ok: [testbed-manager] 2025-05-04 00:22:13.641825 | orchestrator | 2025-05-04 00:22:13.641961 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-04 00:22:13.641981 | orchestrator | 2025-05-04 00:22:13.641996 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-04 00:22:13.642095 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:23:13.706597 | orchestrator | 2025-05-04 00:23:13.706780 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service 
to start] *** 2025-05-04 00:23:13.706843 | orchestrator | Pausing for 60 seconds 2025-05-04 00:23:19.197884 | orchestrator | changed: [testbed-manager] 2025-05-04 00:23:19.198146 | orchestrator | 2025-05-04 00:23:19.198174 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-04 00:23:19.198207 | orchestrator | changed: [testbed-manager] 2025-05-04 00:24:00.824486 | orchestrator | 2025-05-04 00:24:00.824637 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-05-04 00:24:00.824676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-05-04 00:24:06.657453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left). 2025-05-04 00:24:06.657599 | orchestrator | changed: [testbed-manager] 2025-05-04 00:24:06.657621 | orchestrator | 2025-05-04 00:24:06.657650 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-04 00:24:06.657684 | orchestrator | changed: [testbed-manager] 2025-05-04 00:24:06.759912 | orchestrator | 2025-05-04 00:24:06.760038 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-04 00:24:06.760075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-04 00:24:06.821676 | orchestrator | 2025-05-04 00:24:06.821780 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-04 00:24:06.821797 | orchestrator | 2025-05-04 00:24:06.821812 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-04 00:24:06.821841 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:24:06.951922 | orchestrator | 2025-05-04 00:24:06.952022 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-05-04 00:24:06.952041 | orchestrator | testbed-manager : ok=109 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-04 00:24:06.952055 | orchestrator | 2025-05-04 00:24:06.952085 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-04 00:24:06.957551 | orchestrator | + deactivate 2025-05-04 00:24:06.957580 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-04 00:24:06.957597 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-04 00:24:06.957611 | orchestrator | + export PATH 2025-05-04 00:24:06.957626 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-04 00:24:06.957640 | orchestrator | + '[' -n '' ']' 2025-05-04 00:24:06.957654 | orchestrator | + hash -r 2025-05-04 00:24:06.957668 | orchestrator | + '[' -n '' ']' 2025-05-04 00:24:06.957682 | orchestrator | + unset VIRTUAL_ENV 2025-05-04 00:24:06.957696 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-04 00:24:06.957710 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-04 00:24:06.957725 | orchestrator | + unset -f deactivate 2025-05-04 00:24:06.957740 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-04 00:24:06.957761 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-04 00:24:06.958559 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-04 00:24:06.958583 | orchestrator | + local max_attempts=60 2025-05-04 00:24:06.958598 | orchestrator | + local name=ceph-ansible 2025-05-04 00:24:06.958613 | orchestrator | + local attempt_num=1 2025-05-04 00:24:06.958632 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-04 00:24:06.990321 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-04 00:24:06.991222 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-04 00:24:06.991254 | orchestrator | + local max_attempts=60 2025-05-04 00:24:06.991269 | orchestrator | + local name=kolla-ansible 2025-05-04 00:24:06.991284 | orchestrator | + local attempt_num=1 2025-05-04 00:24:06.991304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-04 00:24:07.025520 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-04 00:24:07.026138 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-04 00:24:07.026170 | orchestrator | + local max_attempts=60 2025-05-04 00:24:07.026187 | orchestrator | + local name=osism-ansible 2025-05-04 00:24:07.026204 | orchestrator | + local attempt_num=1 2025-05-04 00:24:07.026227 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-04 00:24:07.058311 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-04 00:24:07.755555 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-04 00:24:07.755681 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-04 00:24:07.755718 | orchestrator | ++ semver 8.1.0 9.0.0 
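The traced `wait_for_container_healthy` calls above reveal the helper's shape: local `max_attempts`, `name`, and `attempt_num` variables plus a `docker inspect` probe of `.State.Health.Status`. A rough reconstruction follows; the `DOCKER_BIN` override, the 5-second poll interval, and the failure handling are assumptions, so the real script may differ.

```shell
#!/bin/sh
# Sketch of wait_for_container_healthy as inferred from the trace above.
# DOCKER_BIN is an assumption that lets the docker probe be stubbed out;
# the poll interval and error message are also assumed.
DOCKER_BIN="${DOCKER_BIN:-/usr/bin/docker}"

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$("$DOCKER_BIN" inspect -f '{{.State.Health.Status}}' "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log each call (e.g. `wait_for_container_healthy 60 ceph-ansible`) returns on the first probe because the containers are already `healthy`.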
2025-05-04 00:24:07.816291 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-04 00:24:08.028248 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-04 00:24:08.028372 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-04 00:24:08.028452 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-04 00:24:08.037994 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038121 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038139 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-04 00:24:08.038175 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-04 00:24:08.038191 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038209 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038223 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038238 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-04 00:24:08.038252 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" 
listener About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038266 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-04 00:24:08.038280 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038294 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038339 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-04 00:24:08.038354 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038368 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038383 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038436 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-04 00:24:08.038461 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-04 00:24:08.189308 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-04 00:24:08.196726 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-04 00:24:08.196772 | orchestrator | netbox-netbox-worker-1 
registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-04 00:24:08.196788 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-05-04 00:24:08.196804 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-05-04 00:24:08.196827 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-04 00:24:08.251746 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-04 00:24:08.256869 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-04 00:24:08.256906 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-04 00:24:09.827169 | orchestrator | 2025-05-04 00:24:09 | INFO  | Task c11a7bd2-40f0-4e8d-9ff0-5f1e1e03aa1b (resolvconf) was prepared for execution. 2025-05-04 00:24:12.742180 | orchestrator | 2025-05-04 00:24:09 | INFO  | It takes a moment until task c11a7bd2-40f0-4e8d-9ff0-5f1e1e03aa1b (resolvconf) has been started and output is visible here. 
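The `semver` helper invoked above prints a signed comparison result (`semver 8.1.0 9.0.0` yields `-1`, `semver 8.1.0 7.0.0` yields `1`) that the script then tests with `-ge 0`. Its source is not shown in the log; a minimal sketch with the same observable behaviour could look like the following, relying on GNU `sort -V` (version sort), which is an assumption about the environment.

```shell
#!/bin/sh
# Sketch of a semver-style comparison matching the traced calls:
# print -1 if $1 < $2, 0 if equal, 1 if $1 > $2. Uses GNU sort -V
# (an assumption); the real helper's implementation is not shown.
semver() {
    if [ "$1" = "$2" ]; then
        printf '%s\n' 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        printf '%s\n' -1
    else
        printf '%s\n' 1
    fi
}
```

With this shape, `[[ $(semver 8.1.0 9.0.0) -ge 0 ]]` is false (skipping the guarded branch) while `[[ $(semver 8.1.0 7.0.0) -ge 0 ]]` is true, matching the trace.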
2025-05-04 00:24:12.742343 | orchestrator | 2025-05-04 00:24:12.742641 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-04 00:24:12.743081 | orchestrator | 2025-05-04 00:24:12.745500 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-04 00:24:12.745725 | orchestrator | Sunday 04 May 2025 00:24:12 +0000 (0:00:00.090) 0:00:00.090 ************ 2025-05-04 00:24:16.733911 | orchestrator | ok: [testbed-manager] 2025-05-04 00:24:16.736646 | orchestrator | 2025-05-04 00:24:16.736700 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-04 00:24:16.801760 | orchestrator | Sunday 04 May 2025 00:24:16 +0000 (0:00:03.995) 0:00:04.086 ************ 2025-05-04 00:24:16.801919 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:24:16.802088 | orchestrator | 2025-05-04 00:24:16.802868 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-04 00:24:16.802903 | orchestrator | Sunday 04 May 2025 00:24:16 +0000 (0:00:00.069) 0:00:04.156 ************ 2025-05-04 00:24:16.892908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-04 00:24:16.894254 | orchestrator | 2025-05-04 00:24:16.894950 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-04 00:24:16.895025 | orchestrator | Sunday 04 May 2025 00:24:16 +0000 (0:00:00.090) 0:00:04.246 ************ 2025-05-04 00:24:16.994104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-04 00:24:16.994462 | orchestrator | 2025-05-04 00:24:16.994628 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-04 00:24:16.994670 | orchestrator | Sunday 04 May 2025 00:24:16 +0000 (0:00:00.102) 0:00:04.348 ************ 2025-05-04 00:24:18.056017 | orchestrator | ok: [testbed-manager] 2025-05-04 00:24:18.057323 | orchestrator | 2025-05-04 00:24:18.057372 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-04 00:24:18.058173 | orchestrator | Sunday 04 May 2025 00:24:18 +0000 (0:00:01.059) 0:00:05.407 ************ 2025-05-04 00:24:18.119589 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:24:18.120450 | orchestrator | 2025-05-04 00:24:18.120661 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-04 00:24:18.121323 | orchestrator | Sunday 04 May 2025 00:24:18 +0000 (0:00:00.063) 0:00:05.471 ************ 2025-05-04 00:24:18.604526 | orchestrator | ok: [testbed-manager] 2025-05-04 00:24:18.666771 | orchestrator | 2025-05-04 00:24:18.666888 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-04 00:24:18.666901 | orchestrator | Sunday 04 May 2025 00:24:18 +0000 (0:00:00.484) 0:00:05.955 ************ 2025-05-04 00:24:18.666925 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:24:18.667160 | orchestrator | 2025-05-04 00:24:18.667178 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-04 00:24:18.667193 | orchestrator | Sunday 04 May 2025 00:24:18 +0000 (0:00:00.063) 0:00:06.019 ************ 2025-05-04 00:24:19.209774 | orchestrator | changed: [testbed-manager] 2025-05-04 00:24:19.210486 | orchestrator | 2025-05-04 00:24:19.210533 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-04 00:24:19.211381 | orchestrator | Sunday 04 May 2025 00:24:19 +0000 (0:00:00.542) 0:00:06.561 ************ 2025-05-04 00:24:20.285259 | orchestrator | changed: 
[testbed-manager] 2025-05-04 00:24:20.286310 | orchestrator | 2025-05-04 00:24:20.286887 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-04 00:24:20.287297 | orchestrator | Sunday 04 May 2025 00:24:20 +0000 (0:00:01.076) 0:00:07.637 ************ 2025-05-04 00:24:21.247814 | orchestrator | ok: [testbed-manager] 2025-05-04 00:24:21.248455 | orchestrator | 2025-05-04 00:24:21.249090 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-04 00:24:21.249819 | orchestrator | Sunday 04 May 2025 00:24:21 +0000 (0:00:00.962) 0:00:08.600 ************ 2025-05-04 00:24:21.329847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-04 00:24:21.331207 | orchestrator | 2025-05-04 00:24:21.331553 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-04 00:24:21.332472 | orchestrator | Sunday 04 May 2025 00:24:21 +0000 (0:00:00.083) 0:00:08.684 ************ 2025-05-04 00:24:22.433139 | orchestrator | changed: [testbed-manager] 2025-05-04 00:24:22.433603 | orchestrator | 2025-05-04 00:24:22.434276 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:24:22.434791 | orchestrator | 2025-05-04 00:24:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-04 00:24:22.435835 | orchestrator | 2025-05-04 00:24:22 | INFO  | Please wait and do not abort execution. 
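The resolvconf play's "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task reported `changed` above. A sketch of what that step amounts to follows; the paths come from the task name, while the force-link semantics are an assumption (Ansible's `file` module with `state: link` and `force: true` behaves similarly).

```shell
#!/bin/sh
# Sketch of the stub-resolv.conf link step from the resolvconf role.
# Paths are taken from the task name in the log; replacing any existing
# /etc/resolv.conf with a symlink (ln -sf) is an assumed detail.
link_stub_resolv() {
    src="$1"   # e.g. /run/systemd/resolve/stub-resolv.conf
    dest="$2"  # e.g. /etc/resolv.conf
    ln -sf "$src" "$dest"
}
```

After this link, lookups on the host go through the systemd-resolved stub listener rather than a static resolver file.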
2025-05-04 00:24:22.435868 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:24:22.436324 | orchestrator |
2025-05-04 00:24:22.436883 | orchestrator | Sunday 04 May 2025 00:24:22 +0000 (0:00:01.101) 0:00:09.786 ************
2025-05-04 00:24:22.437261 | orchestrator | ===============================================================================
2025-05-04 00:24:22.437670 | orchestrator | Gathering Facts --------------------------------------------------------- 4.00s
2025-05-04 00:24:22.438093 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s
2025-05-04 00:24:22.438567 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-05-04 00:24:22.439346 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s
2025-05-04 00:24:22.440328 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s
2025-05-04 00:24:22.440646 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2025-05-04 00:24:22.441072 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-05-04 00:24:22.441431 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s
2025-05-04 00:24:22.441792 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-05-04 00:24:22.442167 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-04 00:24:22.442469 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-05-04 00:24:22.443192 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s
2025-05-04 00:24:22.443372 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-04 00:24:22.796870 | orchestrator | + osism apply sshconfig
2025-05-04 00:24:24.208927 | orchestrator | 2025-05-04 00:24:24 | INFO  | Task a689aead-edff-40e2-870c-1ebd5f36a7ca (sshconfig) was prepared for execution.
2025-05-04 00:24:27.179208 | orchestrator | 2025-05-04 00:24:24 | INFO  | It takes a moment until task a689aead-edff-40e2-870c-1ebd5f36a7ca (sshconfig) has been started and output is visible here.
2025-05-04 00:24:27.179355 | orchestrator |
2025-05-04 00:24:27.181448 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-04 00:24:27.181492 | orchestrator |
2025-05-04 00:24:27.183011 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-04 00:24:27.183472 | orchestrator | Sunday 04 May 2025 00:24:27 +0000 (0:00:00.110) 0:00:00.111 ************
2025-05-04 00:24:27.756776 | orchestrator | ok: [testbed-manager]
2025-05-04 00:24:27.757672 | orchestrator |
2025-05-04 00:24:27.757721 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-04 00:24:27.757747 | orchestrator | Sunday 04 May 2025 00:24:27 +0000 (0:00:00.576) 0:00:00.687 ************
2025-05-04 00:24:28.238130 | orchestrator | changed: [testbed-manager]
2025-05-04 00:24:28.238252 | orchestrator |
2025-05-04 00:24:28.238872 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-04 00:24:28.239497 | orchestrator | Sunday 04 May 2025 00:24:28 +0000 (0:00:00.483) 0:00:01.170 ************
2025-05-04 00:24:33.801773 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-04 00:24:33.802432 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-04 00:24:33.802483 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-04 00:24:33.802868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-04 00:24:33.803677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-04 00:24:33.805774 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-04 00:24:33.806120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-04 00:24:33.806691 | orchestrator |
2025-05-04 00:24:33.807346 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-04 00:24:33.807884 | orchestrator | Sunday 04 May 2025 00:24:33 +0000 (0:00:05.561) 0:00:06.732 ************
2025-05-04 00:24:33.869915 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:24:33.870545 | orchestrator |
2025-05-04 00:24:33.871339 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-04 00:24:33.872587 | orchestrator | Sunday 04 May 2025 00:24:33 +0000 (0:00:00.071) 0:00:06.803 ************
2025-05-04 00:24:34.428735 | orchestrator | changed: [testbed-manager]
2025-05-04 00:24:34.429447 | orchestrator |
2025-05-04 00:24:34.430761 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:24:34.431487 | orchestrator | 2025-05-04 00:24:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:24:34.431986 | orchestrator | 2025-05-04 00:24:34 | INFO  | Please wait and do not abort execution.
2025-05-04 00:24:34.432987 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:24:34.433476 | orchestrator |
2025-05-04 00:24:34.434090 | orchestrator | Sunday 04 May 2025 00:24:34 +0000 (0:00:00.558) 0:00:07.362 ************
2025-05-04 00:24:34.434753 | orchestrator | ===============================================================================
2025-05-04 00:24:34.435366 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.56s
2025-05-04 00:24:34.436490 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s
2025-05-04 00:24:34.437322 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2025-05-04 00:24:34.437569 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s
2025-05-04 00:24:34.438283 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-05-04 00:24:34.797857 | orchestrator | + osism apply known-hosts
2025-05-04 00:24:36.174392 | orchestrator | 2025-05-04 00:24:36 | INFO  | Task 3fe1d52e-1693-4625-beab-8912af47d736 (known-hosts) was prepared for execution.
2025-05-04 00:24:39.124316 | orchestrator | 2025-05-04 00:24:36 | INFO  | It takes a moment until task 3fe1d52e-1693-4625-beab-8912af47d736 (known-hosts) has been started and output is visible here.
2025-05-04 00:24:39.124537 | orchestrator |
2025-05-04 00:24:39.125251 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-04 00:24:39.127596 | orchestrator |
2025-05-04 00:24:39.128445 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-04 00:24:39.129146 | orchestrator | Sunday 04 May 2025 00:24:39 +0000 (0:00:00.106) 0:00:00.106 ************
2025-05-04 00:24:45.044484 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-04 00:24:45.045708 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-04 00:24:45.046710 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-04 00:24:45.046745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-04 00:24:45.047517 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-04 00:24:45.047936 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-04 00:24:45.048620 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-04 00:24:45.049257 | orchestrator |
2025-05-04 00:24:45.051309 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-04 00:24:45.051866 | orchestrator | Sunday 04 May 2025 00:24:45 +0000 (0:00:05.921) 0:00:06.028 ************
2025-05-04 00:24:45.217769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-04 00:24:45.219772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-04 00:24:45.219814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-04 00:24:45.219872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-04 00:24:45.220609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-04 00:24:45.222579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-04 00:24:45.223915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-04 00:24:45.223976 | orchestrator |
2025-05-04 00:24:45.224788 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:45.225273 | orchestrator | Sunday 04 May 2025 00:24:45 +0000 (0:00:00.174) 0:00:06.202 ************
2025-05-04 00:24:46.368994 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEre9LjgHUraqxjFu0Bwm83+jcn9nVxANnNnniNhJaasYq7YpSg0a+49EOKKByAkY8pD9aP+w20MUe2IMgCzMWg=)
2025-05-04 00:24:46.369365 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLEmACPOmaspk7be762C6+Zq8ZJYZJUO9iptGP9YWpzXWAAOQZDo9sqAmtl8sWnxH9fbUQIF+tBS43if+LS4lyavAGQz7A2oJKS+wq0iYuiZRt72rzaU9nVxpVAOF4A6Jg1ov21QtvntVQ1XsowZ18FXaRonsuVYIFCeA8KN7W2ffKBr9QZEs1DJJhFCF5Ix/kRBNK8igaT0Ok2VaNV9UQdfzwNgg9JgOMiQENuMrBzTDLSk6OJfMcfinWe18LlGvAtiFJZoXfbkka5eBbx4n13FnVnbxwizShzHZUrm7fmfOt3Rnue+x7L/pLJKkULTjprNXRSMf4XNifz3f39XkiMUsonP9FqLfnPt1o16qPyRo46pYhKWJr1vzbzptNsXV9OdI09u2GCRiRf41jQ6eX9jZ9Z7FE4w4UQKQEk6Qm8w7TO+UqSJj4bU+GjqS8Lu79A5cZKjmKPCP5p9EAioU09JVU4WWxffCXijhTvD+DxAYNTSRxpBE4+8YKfyX+/Gk=)
2025-05-04 00:24:46.369991 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGiRGpyAzKpOpzbAnJp5HFxfNj/QgFm/U4FZepl04ME6)
2025-05-04 00:24:46.370945 | orchestrator |
2025-05-04 00:24:46.371156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:46.371679 | orchestrator | Sunday 04 May 2025 00:24:46 +0000 (0:00:01.151) 0:00:07.354 ************
2025-05-04 00:24:47.364576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDn+PO5lTLhVglQ024YfaWVHdnutDL/7htOxt9o21o2Vb4mN1oD15OZlyn2Q79pA/Lqh5lqRQM9KiGOxboD3h2Fa4UC3ZrfWSn1D+K3BEmNbU7LlgEfEe0uN90Oq5K/3qqVi4k44jdgC6H/ZWb5jHEAGoLpTI4s/QCBI1KBpiequrmgtnNiEN3tGdjmKV5VUedveaCbK1+GQYKdsokF1k7PygaXsftu44OUHfzrWcsFxiPzr1SzkNQ1awF/l1nwp7ZSnlPtgIBosRpGrz+2cFHykZCgpc3/P4HExk7tlA7frTaF3Q4U2h9rDrZ3iuPVOo6OdAoT895uPjycsgEnIh4tKYOmB+NRJBpsUDIdBJb6+nzPwTneeY3eu2Guq+0PgmKhg5rRXLwqrKCVB5A5jZ5wvuM1mNwJ4OB3MWu/AiQtFsZifVBgsOM6gdiMyrXtxPfpLvbAorVQ2BPax8TQI8ASEts9mWUCvHmhVWi475BOhgK3y9rdaKh13bPCjVI5hyM=)
2025-05-04 00:24:47.366524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzJU6gHLsVIZPk4yGKNLCBJYIellR83iEy2G0ZhhUzb)
2025-05-04 00:24:47.366585 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKkNCKiiBc1gqn1uYzaVoP46whZtU5561htKEo00icr7T8eh6eRXpmZao/23YgBlcX2gETlxX49I7vLNcE5kzb0=)
2025-05-04 00:24:47.366743 | orchestrator |
2025-05-04 00:24:47.367574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:47.368468 | orchestrator | Sunday 04 May 2025 00:24:47 +0000 (0:00:00.995) 0:00:08.349 ************
2025-05-04 00:24:48.391739 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqPTFKJVU70NgZAsa+THQ3Fg7UnPv1I1kO7QXF9H/bEceHZzK8YGTefbvLPbZDw1xkMApc3+ZJDct/ocaGkmC/YVSlqnzLt3OYbVSjj0+Asp4mqR6sPsaObxI/TpMCIUczhABc45oO3Ik2rmwhB7R+N7rB9NBwMaIX117bmBB3RdzELtkAtO9zxEi8YXGFW189jgc6Vstcz9Lno+MeYVC5xeo5rcL4RzCIaFt58Q72BemW4NGuyzW+b6o7rB6lO+YpHJdJAx9vTkuCMhgpGK4UlahSg2IvafxKm2+RYsRtjxDPt1uldS4MRxjUTBakQMwubqWCaUYgb94wXUS7c376zXAEloLDjRS3wfjFLxHNPU08CS6h5Vl4WV5KJ+3SpdI+ZOxbPgdIbgdS3DFfxBwgR2SxE0avU/dCBUe2W455HjvnqWDrcAVFq4ON3sfN6Am7gyI2cgp7IirTcB6o5y3QSRvLlP1wiyvNqjD8MM+M6nkTqOHtQHty7OXa46TPqJU=)
2025-05-04 00:24:48.392637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII8Kyj1seaURRnhoZCkNAD2w+MdkOWL66gB6CShineek)
2025-05-04 00:24:48.394270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPcUebnkXJIf9BnsgDuogNmwbBvTxAhjIoVfEahj304Y5IGzRWmXiaD9AMXXSo/R5wIyiPECxoqG8R98UkEW3Cc=)
2025-05-04 00:24:48.395359 | orchestrator |
2025-05-04 00:24:48.396096 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:48.396770 | orchestrator | Sunday 04 May 2025 00:24:48 +0000 (0:00:01.027) 0:00:09.376 ************
2025-05-04 00:24:49.438246 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1vzXVNX2WdX5cZRc3/nYxNLJQKGyqDv1uH8kA1ViN8cpvFtbxN20hcSMLqW1MhX887s1nTS7116iszU7lljz0smjjg4EifonMhS1+oFES5NS9YaG/Lhy7CHsPqv2Gsl3sDnaKEDKLacIY0SGd7M2FKt1Hn8HDXGg7D5O7EAsEqrz/8ahc/yRV9uXTGedcVhwuJcaRQwakOmrJ24B/1Rtpoetp5k+qkws0rE/M27BLRDWwMHuxlMvzMnVuINugZM0pq5M8uElmsDKAL8FRBgUBATBfuu65KEHhkIv5RZ5NvCbPaMCCrEGbCGJMz6SUlWMV+MowNxQTSRYwMVPqsF2IczCMtt8T62uPp1kfggvjMSLtPT6MMsYsKJHAYk5BECM0yW2FgnO4uFdlXavZuEGKJ3q8+hqHpKW2VgvPNm8YNUMaH94I8WNfMONx1mMOqiCswyvJAaUn/05bfNamuoxgQWzJhD3IFd7hz1r5HcMXx5qIej0Z4NGTo0EcOnlHLrk=)
2025-05-04 00:24:49.438588 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBa3OF8veuAmi2DUlUVAhgxK9fUXQI/Ji0pRK6/wYEzLZc74RUUen/eGlcVW0rHkX5DBsUWk0yRaiNM5tD3V2Nk=)
2025-05-04 00:24:49.438614 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMK90wI6m+YUuPXZrEBODSy/IYkD3FETzPzIGOy5rKQO)
2025-05-04 00:24:49.439485 | orchestrator |
2025-05-04 00:24:49.439909 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:49.439956 | orchestrator | Sunday 04 May 2025 00:24:49 +0000 (0:00:01.046) 0:00:10.423 ************
2025-05-04 00:24:50.449709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/+z0QkdPQfM0gnY3563ERHCHkeFuLICRZarwKo3XC9jJadwQ97MuSkmVPagrneQhthcDu2aMbTMtpKH9kESq4H4466xSp4c2EMH3KBxEC5jSE4KhQZAfhg00OVQVreCKOqbW4vFaZL7xzv7Tfir1NBymWhDRv6m4DZC+8kaR2GsV1ovZRgdjUcJeShI6VGrBrsZvxyfM4mPVWNRbm96rDqQvcK1Y7IuIwvKL9ntTCr8K5FjBKBQsS2BnVf2PkPJ64jXb1HK2i9RK3n4rOXZPbN22WtB4+m7nfyn/aFFhCEBwMGD0b4jXHR9S/r/7cmc4T+18DqnVNn6IPMYZgkT1vwa8jm/mZ8LBFIgtJ9txQV9iGmqlr2hTZ+qzVNvlfCBGdPjNUThdwSdC2M6eks8oEUrkUovaXgQ5lVatFCmrr02RwlHnLh8K/IlsrRG+y45upJIqhPcLsEgJyZrKil1QPZ63BAHnmJMPn9Aaf9a75O8a+imyiFsqYUKTrblzhEFM=)
2025-05-04 00:24:50.450779 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBISIbYjkCy+QP6AtWEg5g+xZDCmjLRhz6stuzJ7YOH1VMfTT41KFAa7E5BcP+FdvIL4WQ6fkQz1eaQJUzFVqPLw=)
2025-05-04 00:24:50.450828 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAWHXuE5g9Q0ABihwd/pAIGWLcWdFbfPV0Xf+ZHL5pZ/)
2025-05-04 00:24:50.450855 | orchestrator |
2025-05-04 00:24:50.451797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:50.452144 | orchestrator | Sunday 04 May 2025 00:24:50 +0000 (0:00:01.009) 0:00:11.433 ************
2025-05-04 00:24:51.480501 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHy0o6mBA8z/YrPD8x5V3EfbnDtoTiE0/MIwXu5Kt1Un)
2025-05-04 00:24:51.481112 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZIkPWO28vvz0xqRURGbM2K0utbdg0/94nJvvNfTqr0Ew1JMj6+tbYKZ3Sjdx5g5ZHjvdwzFXOVBMxd0vavJCJ9uhgE/2LvRwUk9liU+IbdSeaIhn33Yw6d5gWach3n1+GE7V3pJ616c35chxjr7tmfS6ey9RvatBAqOrXbYypSn+4S2LM+ZL6UDnZZMxBSeW2ArPc2nvKI8c7vqOPUXobmOyIMZi13NfV59M3VmCIrNtmd0rmzVHAL2KZ9G6c/ET1SJ57NKZJQviYcvkqzfFCsTk+4IZA1CXFy0mPUccuZv5AQkM4qOPmOW91GPw8g+aa92NFl3eiCFZQoQfwOHmfc3QebmBIsFzMHFtUDj8w7av+ZWcg1qJn5tmIFtNomP4RTc1HUDIGlhR9v10lp5qiDhBpPPEB5S9NQK4mf+JEaTtVV4Y3chD1Vxq/IQIfHeZ/r+RMal0cEkkQzFFAJO6vx8K17qzUOF2fwbSx+vtWTf0YZjURXLXQDESmKgi6JeM=)
2025-05-04 00:24:51.481197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBND/dYEL40c4FUgfGijbETYtPHS2SSmK+YUIXDAXVjXnJ2/5og/8mqZ2+rNA4gMLuOKSxvzQrPidlMRn8qSOtfE=)
2025-05-04 00:24:51.481227 | orchestrator |
2025-05-04 00:24:51.481544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:51.482535 | orchestrator | Sunday 04 May 2025 00:24:51 +0000 (0:00:01.030) 0:00:12.464 ************
2025-05-04 00:24:52.526462 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAuXgWjruVSJVF3BXNmi8V2K62F0KCIRI70WErVs2J/SDla3DYrKeu8WPoAuk0DBiVpEtKTzCji6okZhY2EfsMZmUN+InWxRhZfSa96d3qobAXFal+vIJ8JUApnnGVsS/wxvFw7J3ZTzS08IMQ1AzVIqe4yU6UwjQ9r+Ol0KL+U8gtr5wA2veoNmn87SpGkANEYpk8m5B9pToIE+iOZ30GyhwcBY/DTdmXd69SyUIDmMpqaAGnEJuvi6wR3MYjP875bezJe87k6+x/EQtEdow2b6TIJnVfUoH9i7CDjCSQJj5D9UgCI6BoFT6KqPtxiprhCVatRW2LB/8SZfXxf5tak469WJVeQtEVet9Rwj9BOZYhnDHGAFPh3J3zRRXf+9GXA/fck3qwvjE22XaOssLq3KbqnS2Gx4sTA2E8xup2J5xumR8wSGM8HhFGr1Dzo+IJsFv6iFc7X77oqoxQQy3Jm0h55LTizWeJ6QiHtfkYIXollifBc6rUx6SEV31uCOc=)
2025-05-04 00:24:52.527011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJgpUhHu2U+ezJXYnH5gk2OKcfkSyLIoC0k4oK7wolZ1af87QF1c3bo2txIcMGeSLhF8QPjquKfF/PJ7cvjU1I=)
2025-05-04 00:24:52.527055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJK44f0TIoApi8dp/3VaVU53399asGG8AW3+UDjL9/Pp)
2025-05-04 00:24:52.527277 | orchestrator |
2025-05-04 00:24:52.527560 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-05-04 00:24:52.527943 | orchestrator | Sunday 04 May 2025 00:24:52 +0000 (0:00:01.045) 0:00:13.510 ************
2025-05-04 00:24:57.702513 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-04 00:24:57.705139 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-04 00:24:57.705209 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-04 00:24:57.709148 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-04 00:24:57.709343 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-04 00:24:57.709978 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-04 00:24:57.710065 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-04 00:24:57.710087 | orchestrator |
2025-05-04 00:24:57.710106 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-05-04 00:24:57.710131 | orchestrator | Sunday 04 May 2025 00:24:57 +0000 (0:00:05.176) 0:00:18.686 ************
2025-05-04 00:24:57.864270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-04 00:24:57.865287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-04 00:24:57.865385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-04 00:24:57.867766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-04 00:24:57.868448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-04 00:24:57.871689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-04 00:24:57.872193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-04 00:24:57.872995 | orchestrator |
2025-05-04 00:24:57.873635 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:57.874105 | orchestrator | Sunday 04 May 2025 00:24:57 +0000 (0:00:00.163) 0:00:18.850 ************
2025-05-04 00:24:58.921184 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGiRGpyAzKpOpzbAnJp5HFxfNj/QgFm/U4FZepl04ME6)
2025-05-04 00:24:58.921640 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLEmACPOmaspk7be762C6+Zq8ZJYZJUO9iptGP9YWpzXWAAOQZDo9sqAmtl8sWnxH9fbUQIF+tBS43if+LS4lyavAGQz7A2oJKS+wq0iYuiZRt72rzaU9nVxpVAOF4A6Jg1ov21QtvntVQ1XsowZ18FXaRonsuVYIFCeA8KN7W2ffKBr9QZEs1DJJhFCF5Ix/kRBNK8igaT0Ok2VaNV9UQdfzwNgg9JgOMiQENuMrBzTDLSk6OJfMcfinWe18LlGvAtiFJZoXfbkka5eBbx4n13FnVnbxwizShzHZUrm7fmfOt3Rnue+x7L/pLJKkULTjprNXRSMf4XNifz3f39XkiMUsonP9FqLfnPt1o16qPyRo46pYhKWJr1vzbzptNsXV9OdI09u2GCRiRf41jQ6eX9jZ9Z7FE4w4UQKQEk6Qm8w7TO+UqSJj4bU+GjqS8Lu79A5cZKjmKPCP5p9EAioU09JVU4WWxffCXijhTvD+DxAYNTSRxpBE4+8YKfyX+/Gk=)
2025-05-04 00:24:58.922182 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEre9LjgHUraqxjFu0Bwm83+jcn9nVxANnNnniNhJaasYq7YpSg0a+49EOKKByAkY8pD9aP+w20MUe2IMgCzMWg=)
2025-05-04 00:24:58.922813 | orchestrator |
2025-05-04 00:24:58.923125 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:58.923455 | orchestrator | Sunday 04 May 2025 00:24:58 +0000 (0:00:01.054) 0:00:19.904 ************
2025-05-04 00:24:59.971122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzJU6gHLsVIZPk4yGKNLCBJYIellR83iEy2G0ZhhUzb)
2025-05-04 00:24:59.972190 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDn+PO5lTLhVglQ024YfaWVHdnutDL/7htOxt9o21o2Vb4mN1oD15OZlyn2Q79pA/Lqh5lqRQM9KiGOxboD3h2Fa4UC3ZrfWSn1D+K3BEmNbU7LlgEfEe0uN90Oq5K/3qqVi4k44jdgC6H/ZWb5jHEAGoLpTI4s/QCBI1KBpiequrmgtnNiEN3tGdjmKV5VUedveaCbK1+GQYKdsokF1k7PygaXsftu44OUHfzrWcsFxiPzr1SzkNQ1awF/l1nwp7ZSnlPtgIBosRpGrz+2cFHykZCgpc3/P4HExk7tlA7frTaF3Q4U2h9rDrZ3iuPVOo6OdAoT895uPjycsgEnIh4tKYOmB+NRJBpsUDIdBJb6+nzPwTneeY3eu2Guq+0PgmKhg5rRXLwqrKCVB5A5jZ5wvuM1mNwJ4OB3MWu/AiQtFsZifVBgsOM6gdiMyrXtxPfpLvbAorVQ2BPax8TQI8ASEts9mWUCvHmhVWi475BOhgK3y9rdaKh13bPCjVI5hyM=)
2025-05-04 00:24:59.972269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKkNCKiiBc1gqn1uYzaVoP46whZtU5561htKEo00icr7T8eh6eRXpmZao/23YgBlcX2gETlxX49I7vLNcE5kzb0=)
2025-05-04 00:24:59.973308 | orchestrator |
2025-05-04 00:24:59.973911 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:24:59.974191 | orchestrator | Sunday 04 May 2025 00:24:59 +0000 (0:00:01.051) 0:00:20.955 ************
2025-05-04 00:25:01.021953 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII8Kyj1seaURRnhoZCkNAD2w+MdkOWL66gB6CShineek)
2025-05-04 00:25:01.022278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqPTFKJVU70NgZAsa+THQ3Fg7UnPv1I1kO7QXF9H/bEceHZzK8YGTefbvLPbZDw1xkMApc3+ZJDct/ocaGkmC/YVSlqnzLt3OYbVSjj0+Asp4mqR6sPsaObxI/TpMCIUczhABc45oO3Ik2rmwhB7R+N7rB9NBwMaIX117bmBB3RdzELtkAtO9zxEi8YXGFW189jgc6Vstcz9Lno+MeYVC5xeo5rcL4RzCIaFt58Q72BemW4NGuyzW+b6o7rB6lO+YpHJdJAx9vTkuCMhgpGK4UlahSg2IvafxKm2+RYsRtjxDPt1uldS4MRxjUTBakQMwubqWCaUYgb94wXUS7c376zXAEloLDjRS3wfjFLxHNPU08CS6h5Vl4WV5KJ+3SpdI+ZOxbPgdIbgdS3DFfxBwgR2SxE0avU/dCBUe2W455HjvnqWDrcAVFq4ON3sfN6Am7gyI2cgp7IirTcB6o5y3QSRvLlP1wiyvNqjD8MM+M6nkTqOHtQHty7OXa46TPqJU=)
2025-05-04 00:25:01.022926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPcUebnkXJIf9BnsgDuogNmwbBvTxAhjIoVfEahj304Y5IGzRWmXiaD9AMXXSo/R5wIyiPECxoqG8R98UkEW3Cc=)
2025-05-04 00:25:01.023391 | orchestrator |
2025-05-04 00:25:01.024271 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:25:01.024577 | orchestrator | Sunday 04 May 2025 00:25:01 +0000 (0:00:01.050) 0:00:22.006 ************
2025-05-04 00:25:02.055485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1vzXVNX2WdX5cZRc3/nYxNLJQKGyqDv1uH8kA1ViN8cpvFtbxN20hcSMLqW1MhX887s1nTS7116iszU7lljz0smjjg4EifonMhS1+oFES5NS9YaG/Lhy7CHsPqv2Gsl3sDnaKEDKLacIY0SGd7M2FKt1Hn8HDXGg7D5O7EAsEqrz/8ahc/yRV9uXTGedcVhwuJcaRQwakOmrJ24B/1Rtpoetp5k+qkws0rE/M27BLRDWwMHuxlMvzMnVuINugZM0pq5M8uElmsDKAL8FRBgUBATBfuu65KEHhkIv5RZ5NvCbPaMCCrEGbCGJMz6SUlWMV+MowNxQTSRYwMVPqsF2IczCMtt8T62uPp1kfggvjMSLtPT6MMsYsKJHAYk5BECM0yW2FgnO4uFdlXavZuEGKJ3q8+hqHpKW2VgvPNm8YNUMaH94I8WNfMONx1mMOqiCswyvJAaUn/05bfNamuoxgQWzJhD3IFd7hz1r5HcMXx5qIej0Z4NGTo0EcOnlHLrk=)
2025-05-04 00:25:02.056060 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBa3OF8veuAmi2DUlUVAhgxK9fUXQI/Ji0pRK6/wYEzLZc74RUUen/eGlcVW0rHkX5DBsUWk0yRaiNM5tD3V2Nk=)
2025-05-04 00:25:02.057208 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMK90wI6m+YUuPXZrEBODSy/IYkD3FETzPzIGOy5rKQO)
2025-05-04 00:25:02.057468 | orchestrator |
2025-05-04 00:25:02.057515 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:25:03.099011 | orchestrator | Sunday 04 May 2025 00:25:02 +0000 (0:00:01.034) 0:00:23.040 ************
2025-05-04 00:25:03.099208 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBISIbYjkCy+QP6AtWEg5g+xZDCmjLRhz6stuzJ7YOH1VMfTT41KFAa7E5BcP+FdvIL4WQ6fkQz1eaQJUzFVqPLw=)
2025-05-04 00:25:03.100345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/+z0QkdPQfM0gnY3563ERHCHkeFuLICRZarwKo3XC9jJadwQ97MuSkmVPagrneQhthcDu2aMbTMtpKH9kESq4H4466xSp4c2EMH3KBxEC5jSE4KhQZAfhg00OVQVreCKOqbW4vFaZL7xzv7Tfir1NBymWhDRv6m4DZC+8kaR2GsV1ovZRgdjUcJeShI6VGrBrsZvxyfM4mPVWNRbm96rDqQvcK1Y7IuIwvKL9ntTCr8K5FjBKBQsS2BnVf2PkPJ64jXb1HK2i9RK3n4rOXZPbN22WtB4+m7nfyn/aFFhCEBwMGD0b4jXHR9S/r/7cmc4T+18DqnVNn6IPMYZgkT1vwa8jm/mZ8LBFIgtJ9txQV9iGmqlr2hTZ+qzVNvlfCBGdPjNUThdwSdC2M6eks8oEUrkUovaXgQ5lVatFCmrr02RwlHnLh8K/IlsrRG+y45upJIqhPcLsEgJyZrKil1QPZ63BAHnmJMPn9Aaf9a75O8a+imyiFsqYUKTrblzhEFM=)
2025-05-04 00:25:03.100468 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAWHXuE5g9Q0ABihwd/pAIGWLcWdFbfPV0Xf+ZHL5pZ/)
2025-05-04 00:25:03.101310 | orchestrator |
2025-05-04 00:25:03.101798 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:25:03.102338 | orchestrator | Sunday 04 May 2025 00:25:03 +0000 (0:00:01.042) 0:00:24.083 ************
2025-05-04 00:25:04.151165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZIkPWO28vvz0xqRURGbM2K0utbdg0/94nJvvNfTqr0Ew1JMj6+tbYKZ3Sjdx5g5ZHjvdwzFXOVBMxd0vavJCJ9uhgE/2LvRwUk9liU+IbdSeaIhn33Yw6d5gWach3n1+GE7V3pJ616c35chxjr7tmfS6ey9RvatBAqOrXbYypSn+4S2LM+ZL6UDnZZMxBSeW2ArPc2nvKI8c7vqOPUXobmOyIMZi13NfV59M3VmCIrNtmd0rmzVHAL2KZ9G6c/ET1SJ57NKZJQviYcvkqzfFCsTk+4IZA1CXFy0mPUccuZv5AQkM4qOPmOW91GPw8g+aa92NFl3eiCFZQoQfwOHmfc3QebmBIsFzMHFtUDj8w7av+ZWcg1qJn5tmIFtNomP4RTc1HUDIGlhR9v10lp5qiDhBpPPEB5S9NQK4mf+JEaTtVV4Y3chD1Vxq/IQIfHeZ/r+RMal0cEkkQzFFAJO6vx8K17qzUOF2fwbSx+vtWTf0YZjURXLXQDESmKgi6JeM=)
2025-05-04 00:25:04.152165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBND/dYEL40c4FUgfGijbETYtPHS2SSmK+YUIXDAXVjXnJ2/5og/8mqZ2+rNA4gMLuOKSxvzQrPidlMRn8qSOtfE=)
2025-05-04 00:25:04.152240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHy0o6mBA8z/YrPD8x5V3EfbnDtoTiE0/MIwXu5Kt1Un)
2025-05-04 00:25:04.152267 | orchestrator |
2025-05-04 00:25:04.152466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-04 00:25:04.152497 | orchestrator | Sunday 04 May 2025 00:25:04 +0000 (0:00:01.052) 0:00:25.136 ************
2025-05-04 00:25:05.251124 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJK44f0TIoApi8dp/3VaVU53399asGG8AW3+UDjL9/Pp)
2025-05-04 00:25:05.251997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAuXgWjruVSJVF3BXNmi8V2K62F0KCIRI70WErVs2J/SDla3DYrKeu8WPoAuk0DBiVpEtKTzCji6okZhY2EfsMZmUN+InWxRhZfSa96d3qobAXFal+vIJ8JUApnnGVsS/wxvFw7J3ZTzS08IMQ1AzVIqe4yU6UwjQ9r+Ol0KL+U8gtr5wA2veoNmn87SpGkANEYpk8m5B9pToIE+iOZ30GyhwcBY/DTdmXd69SyUIDmMpqaAGnEJuvi6wR3MYjP875bezJe87k6+x/EQtEdow2b6TIJnVfUoH9i7CDjCSQJj5D9UgCI6BoFT6KqPtxiprhCVatRW2LB/8SZfXxf5tak469WJVeQtEVet9Rwj9BOZYhnDHGAFPh3J3zRRXf+9GXA/fck3qwvjE22XaOssLq3KbqnS2Gx4sTA2E8xup2J5xumR8wSGM8HhFGr1Dzo+IJsFv6iFc7X77oqoxQQy3Jm0h55LTizWeJ6QiHtfkYIXollifBc6rUx6SEV31uCOc=)
2025-05-04 00:25:05.253044 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJgpUhHu2U+ezJXYnH5gk2OKcfkSyLIoC0k4oK7wolZ1af87QF1c3bo2txIcMGeSLhF8QPjquKfF/PJ7cvjU1I=)
2025-05-04 00:25:05.254448 | orchestrator |
2025-05-04 00:25:05.255356 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-05-04 00:25:05.256175 | orchestrator | Sunday 04 May 2025 00:25:05 +0000 (0:00:01.099) 0:00:26.235 ************
2025-05-04 00:25:05.408577 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-04 00:25:05.409301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-04 00:25:05.410096 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-04 00:25:05.413126 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-04 00:25:05.413619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-04 00:25:05.414348 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-04 00:25:05.414573 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-04 00:25:05.415239 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:25:05.415662 | orchestrator |
2025-05-04 00:25:05.416358 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-05-04 00:25:05.416572 | orchestrator | Sunday 04 May 2025 00:25:05 +0000 (0:00:00.158) 0:00:26.393 ************
2025-05-04 00:25:05.461977 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:25:05.462374 | orchestrator |
2025-05-04 00:25:05.462443 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-05-04 00:25:05.462469 | orchestrator | Sunday 04 May 2025 00:25:05 +0000 (0:00:00.054) 0:00:26.448 ************
2025-05-04 00:25:05.515847 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:25:05.516050 | orchestrator |
2025-05-04 00:25:05.516720 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-05-04 00:25:05.517201 | orchestrator | Sunday 04 May 2025 00:25:05 +0000 (0:00:00.052) 0:00:26.501 ************
2025-05-04 00:25:06.247219 | orchestrator | changed: [testbed-manager]
2025-05-04 00:25:06.248631 | orchestrator |
2025-05-04 00:25:06.248716 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:25:06.249107 | orchestrator | 2025-05-04 00:25:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:25:06.249140 | orchestrator | 2025-05-04 00:25:06 | INFO  | Please wait and do not abort execution.
2025-05-04 00:25:06.249163 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:25:06.249848 | orchestrator |
2025-05-04 00:25:06.250468 | orchestrator | Sunday 04 May 2025 00:25:06 +0000 (0:00:00.730) 0:00:27.232 ************
2025-05-04 00:25:06.250909 | orchestrator | ===============================================================================
2025-05-04 00:25:06.251571 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.92s
2025-05-04 00:25:06.252322 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.18s
2025-05-04 00:25:06.252742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2025-05-04 00:25:06.253200 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-05-04 00:25:06.253610 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.254392 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.254837 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.255285 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.255618 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.256007 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-04 00:25:06.256200 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-04 00:25:06.257465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-04 00:25:06.257945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-04 00:25:06.257993 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-04 00:25:06.258217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-04 00:25:06.258243 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-04 00:25:06.258727 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s
2025-05-04 00:25:06.258760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-05-04 00:25:06.259161 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-05-04 00:25:06.587595 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s
2025-05-04 00:25:06.587739 | orchestrator | + osism apply squid
2025-05-04 00:25:07.966918 | orchestrator | 2025-05-04 00:25:07 | INFO  | Task 2451d60e-25eb-49d5-8647-41badc475b83 (squid) was prepared for execution.
2025-05-04 00:25:10.910068 | orchestrator | 2025-05-04 00:25:07 | INFO  | It takes a moment until task 2451d60e-25eb-49d5-8647-41badc475b83 (squid) has been started and output is visible here.
2025-05-04 00:25:10.910227 | orchestrator |
2025-05-04 00:25:10.910632 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-04 00:25:10.912535 | orchestrator |
2025-05-04 00:25:10.913355 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-04 00:25:10.913870 | orchestrator | Sunday 04 May 2025 00:25:10 +0000 (0:00:00.101) 0:00:00.101 ************
2025-05-04 00:25:10.999354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-04 00:25:11.000684 | orchestrator |
2025-05-04 00:25:11.000957 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-04 00:25:11.000991 | orchestrator | Sunday 04 May 2025 00:25:10 +0000 (0:00:00.090) 0:00:00.192 ************
2025-05-04 00:25:12.321758 | orchestrator | ok: [testbed-manager]
2025-05-04 00:25:12.322360 | orchestrator |
2025-05-04 00:25:12.323049 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-04 00:25:12.323688 | orchestrator | Sunday 04 May 2025 00:25:12 +0000 (0:00:01.322) 0:00:01.515 ************
2025-05-04 00:25:13.458650 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-04 00:25:13.459252 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-04 00:25:13.459302 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-04 00:25:13.460045 | orchestrator |
2025-05-04 00:25:13.460931 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-04 00:25:13.461697 | orchestrator | Sunday 04 May 2025 00:25:13 +0000 (0:00:01.135) 0:00:02.650 ************
2025-05-04 00:25:14.525485 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-04 00:25:14.526569 | orchestrator |
2025-05-04 00:25:14.526802 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-04 00:25:14.884502 | orchestrator | Sunday 04 May 2025 00:25:14 +0000 (0:00:01.066) 0:00:03.717 ************
2025-05-04 00:25:14.884743 | orchestrator | ok: [testbed-manager]
2025-05-04 00:25:14.885169 | orchestrator |
2025-05-04 00:25:14.885211 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-04 00:25:14.885441 | orchestrator | Sunday 04 May 2025 00:25:14 +0000 (0:00:00.360) 0:00:04.078 ************
2025-05-04 00:25:15.884875 | orchestrator | changed: [testbed-manager]
2025-05-04 00:25:15.886183 | orchestrator |
2025-05-04 00:25:15.886389 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-04 00:25:15.888261 | orchestrator | Sunday 04 May 2025 00:25:15 +0000 (0:00:00.998) 0:00:05.076 ************
2025-05-04 00:25:47.569595 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-04 00:25:47.569746 | orchestrator | ok: [testbed-manager]
2025-05-04 00:25:47.569769 | orchestrator |
2025-05-04 00:25:47.569785 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-04 00:25:47.569807 | orchestrator | Sunday 04 May 2025 00:25:47 +0000 (0:00:31.680) 0:00:36.757 ************
2025-05-04 00:25:59.975605 | orchestrator | changed: [testbed-manager]
2025-05-04 00:27:00.065733 | orchestrator |
2025-05-04 00:27:00.065853 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-04 00:27:00.065865 | orchestrator | Sunday 04 May 2025 00:25:59 +0000 (0:00:12.409) 0:00:49.166 ************
2025-05-04 00:27:00.065885 | orchestrator | Pausing for 60 seconds
2025-05-04 00:27:00.066810 | orchestrator | changed: [testbed-manager]
2025-05-04 00:27:00.066830 | orchestrator |
2025-05-04 00:27:00.066840 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-04 00:27:00.066853 | orchestrator | Sunday 04 May 2025 00:27:00 +0000 (0:01:00.089) 0:01:49.256 ************
2025-05-04 00:27:00.124458 | orchestrator | ok: [testbed-manager]
2025-05-04 00:27:00.125017 | orchestrator |
2025-05-04 00:27:00.125058 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-04 00:27:00.125886 | orchestrator | Sunday 04 May 2025 00:27:00 +0000 (0:00:00.062) 0:01:49.318 ************
2025-05-04 00:27:00.719829 | orchestrator | changed: [testbed-manager]
2025-05-04 00:27:00.720329 | orchestrator |
2025-05-04 00:27:00.720378 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:27:00.721333 | orchestrator | 2025-05-04 00:27:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:27:00.722264 | orchestrator | 2025-05-04 00:27:00 | INFO  | Please wait and do not abort execution.
2025-05-04 00:27:00.722305 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:27:00.722602 | orchestrator |
2025-05-04 00:27:00.723596 | orchestrator | Sunday 04 May 2025 00:27:00 +0000 (0:00:00.595) 0:01:49.913 ************
2025-05-04 00:27:00.723859 | orchestrator | ===============================================================================
2025-05-04 00:27:00.724325 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2025-05-04 00:27:00.724779 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.68s
2025-05-04 00:27:00.725186 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.41s
2025-05-04 00:27:00.725504 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.32s
2025-05-04 00:27:00.725977 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s
2025-05-04 00:27:00.726279 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s
2025-05-04 00:27:00.726750 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s
2025-05-04 00:27:00.727106 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2025-05-04 00:27:00.727552 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-05-04 00:27:00.727834 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-05-04 00:27:00.728242 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-04 00:27:01.128079 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-04 00:27:01.131552 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-04 00:27:01.131650 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-04 00:27:01.187748 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-04 00:27:01.192509 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-04 00:27:01.192551 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-04 00:27:01.192577 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-04 00:27:01.197195 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-04 00:27:01.202965 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-04 00:27:02.634917 | orchestrator | 2025-05-04 00:27:02 | INFO  | Task 1f6287aa-36c2-49aa-8750-93a901158057 (operator) was prepared for execution.
2025-05-04 00:27:05.579219 | orchestrator | 2025-05-04 00:27:02 | INFO  | It takes a moment until task 1f6287aa-36c2-49aa-8750-93a901158057 (operator) has been started and output is visible here.
2025-05-04 00:27:05.579492 | orchestrator |
2025-05-04 00:27:05.579587 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-04 00:27:05.580215 | orchestrator |
2025-05-04 00:27:05.581133 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-04 00:27:05.584466 | orchestrator | Sunday 04 May 2025 00:27:05 +0000 (0:00:00.091) 0:00:00.091 ************
2025-05-04 00:27:09.795894 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:27:09.796870 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:09.799324 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:27:09.799825 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:09.799860 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:09.801006 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:27:09.802115 | orchestrator |
2025-05-04 00:27:09.802658 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-04 00:27:09.803533 | orchestrator | Sunday 04 May 2025 00:27:09 +0000 (0:00:04.218) 0:00:04.310 ************
2025-05-04 00:27:10.568809 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:27:10.569400 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:10.569737 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:10.569761 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:27:10.569776 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:10.569852 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:27:10.570287 | orchestrator |
2025-05-04 00:27:10.570550 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-04 00:27:10.571178 | orchestrator |
2025-05-04 00:27:10.572043 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-04 00:27:10.572464 | orchestrator | Sunday 04 May 2025 00:27:10 +0000 (0:00:00.770) 0:00:05.080 ************
2025-05-04 00:27:10.619363 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:27:10.667703 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:27:10.685802 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:27:10.735270 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:10.735653 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:10.736220 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:10.736483 | orchestrator |
2025-05-04 00:27:10.737156 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-04 00:27:10.816497 | orchestrator | Sunday 04 May 2025 00:27:10 +0000 (0:00:00.168) 0:00:05.249 ************
2025-05-04 00:27:10.816619 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:27:10.833248 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:27:10.885049 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:27:10.885270 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:10.885891 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:10.886333 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:10.886690 | orchestrator |
2025-05-04 00:27:10.887005 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-04 00:27:10.887289 | orchestrator | Sunday 04 May 2025 00:27:10 +0000 (0:00:00.150) 0:00:05.400 ************
2025-05-04 00:27:11.504918 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:11.505903 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:11.505962 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:11.506165 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:11.509636 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:11.511188 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:11.511578 | orchestrator |
2025-05-04 00:27:11.511611 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-04 00:27:11.511633 | orchestrator | Sunday 04 May 2025 00:27:11 +0000 (0:00:00.617) 0:00:06.017 ************
2025-05-04 00:27:12.279823 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:12.280209 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:12.280799 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:12.281134 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:12.281835 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:12.282227 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:12.282583 | orchestrator |
2025-05-04 00:27:12.283075 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-04 00:27:12.283575 | orchestrator | Sunday 04 May 2025 00:27:12 +0000 (0:00:00.775) 0:00:06.793 ************
2025-05-04 00:27:13.426544 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-04 00:27:13.428150 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-04 00:27:13.432701 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-04 00:27:13.432766 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-04 00:27:13.432782 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-04 00:27:13.432811 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-04 00:27:13.433646 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-04 00:27:13.434203 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-04 00:27:13.434924 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-04 00:27:13.435623 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-04 00:27:13.436121 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-04 00:27:13.436656 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-04 00:27:13.437567 | orchestrator |
2025-05-04 00:27:13.437957 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-04 00:27:13.438396 | orchestrator | Sunday 04 May 2025 00:27:13 +0000 (0:00:01.146) 0:00:07.940 ************
2025-05-04 00:27:14.677835 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:14.680886 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:14.680961 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:14.681264 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:14.681883 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:14.683568 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:14.683897 | orchestrator |
2025-05-04 00:27:14.686662 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-04 00:27:14.687075 | orchestrator | Sunday 04 May 2025 00:27:14 +0000 (0:00:01.249) 0:00:09.189 ************
2025-05-04 00:27:15.883832 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-04 00:27:15.884184 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-04 00:27:15.887726 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-04 00:27:15.951197 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.951403 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.951860 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.955566 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.955932 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.956401 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-04 00:27:15.956925 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.957706 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.957859 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.958211 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.958727 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.959557 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-04 00:27:15.960000 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.960894 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.961554 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.963511 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.963835 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.964244 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-04 00:27:15.964553 | orchestrator |
2025-05-04 00:27:15.965020 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-04 00:27:15.965532 | orchestrator | Sunday 04 May 2025 00:27:15 +0000 (0:00:01.275) 0:00:10.465 ************
2025-05-04 00:27:16.591693 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:16.592288 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:16.598390 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:16.654950 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:16.655041 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:16.655059 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:16.655075 | orchestrator |
2025-05-04 00:27:16.655091 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-04 00:27:16.655108 | orchestrator | Sunday 04 May 2025 00:27:16 +0000 (0:00:00.640) 0:00:11.106 ************
2025-05-04 00:27:16.655137 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:27:16.677414 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:27:16.703656 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:27:16.737647 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:16.737962 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:16.737999 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:16.738621 | orchestrator |
2025-05-04 00:27:16.744481 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-04 00:27:16.745009 | orchestrator | Sunday 04 May 2025 00:27:16 +0000 (0:00:00.147) 0:00:11.253 ************
2025-05-04 00:27:17.513850 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-04 00:27:17.515527 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:17.515709 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-04 00:27:17.516294 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-04 00:27:17.517857 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:17.517970 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:17.517992 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-04 00:27:17.518008 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:17.518077 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-04 00:27:17.518099 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:17.518384 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-04 00:27:17.518641 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:17.520160 | orchestrator |
2025-05-04 00:27:17.557654 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-04 00:27:17.557787 | orchestrator | Sunday 04 May 2025 00:27:17 +0000 (0:00:00.754) 0:00:12.007 ************
2025-05-04 00:27:17.557822 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:27:17.576851 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:27:17.621856 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:27:17.651773 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:17.651995 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:17.654696 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:17.655233 | orchestrator |
2025-05-04 00:27:17.656637 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-04 00:27:17.658601 | orchestrator | Sunday 04 May 2025 00:27:17 +0000 (0:00:00.166) 0:00:12.165 ************
2025-05-04 00:27:17.719945 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:27:17.759955 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:27:17.762178 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:27:17.816880 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:17.817883 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:17.821564 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:17.896031 | orchestrator |
2025-05-04 00:27:17.896145 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-04 00:27:17.896163 | orchestrator | Sunday 04 May 2025 00:27:17 +0000 (0:00:00.166) 0:00:12.331 ************
2025-05-04 00:27:17.896195 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:27:17.924848 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:27:17.942133 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:27:17.985621 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:17.988654 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:17.988729 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:17.990258 | orchestrator |
2025-05-04 00:27:17.992325 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-04 00:27:17.997691 | orchestrator | Sunday 04 May 2025 00:27:17 +0000 (0:00:00.162) 0:00:12.494 ************
2025-05-04 00:27:18.648578 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:18.648819 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:18.649285 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:18.650261 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:18.650754 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:18.651223 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:18.651680 | orchestrator |
2025-05-04 00:27:18.652848 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-04 00:27:18.653019 | orchestrator | Sunday 04 May 2025 00:27:18 +0000 (0:00:00.668) 0:00:13.163 ************
2025-05-04 00:27:18.746063 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:27:18.766373 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:27:18.861066 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:27:18.861753 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:18.862832 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:18.864003 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:18.866671 | orchestrator |
2025-05-04 00:27:18.874409 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:27:18.874523 | orchestrator | 2025-05-04 00:27:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:27:18.874542 | orchestrator | 2025-05-04 00:27:18 | INFO  | Please wait and do not abort execution.
2025-05-04 00:27:18.874566 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.875460 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.876154 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.876562 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.877113 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.877852 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-04 00:27:18.878449 | orchestrator |
2025-05-04 00:27:18.878858 | orchestrator | Sunday 04 May 2025 00:27:18 +0000 (0:00:00.213) 0:00:13.376 ************
2025-05-04 00:27:18.879583 | orchestrator | ===============================================================================
2025-05-04 00:27:18.882593 | orchestrator | Gathering Facts --------------------------------------------------------- 4.22s
2025-05-04 00:27:18.882852 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2025-05-04 00:27:18.882883 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-05-04 00:27:18.882897 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-05-04 00:27:18.882911 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s
2025-05-04 00:27:18.882925 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2025-05-04 00:27:18.882939 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2025-05-04 00:27:18.882954 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-05-04 00:27:18.882973 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.64s
2025-05-04 00:27:18.883285 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2025-05-04 00:27:18.883651 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-05-04 00:27:18.884149 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-05-04 00:27:18.884577 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2025-05-04 00:27:18.884909 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-05-04 00:27:18.885945 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-05-04 00:27:18.887068 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2025-05-04 00:27:18.887116 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2025-05-04 00:27:19.280627 | orchestrator | + osism apply --environment custom facts
2025-05-04 00:27:20.773639 | orchestrator | 2025-05-04 00:27:20 | INFO  | Trying to run play facts in environment custom
2025-05-04 00:27:20.830640 | orchestrator | 2025-05-04 00:27:20 | INFO  | Task fe1e43bc-98ad-4034-9533-91bfb7a320ca (facts) was prepared for execution.
2025-05-04 00:27:23.682810 | orchestrator | 2025-05-04 00:27:20 | INFO  | It takes a moment until task fe1e43bc-98ad-4034-9533-91bfb7a320ca (facts) has been started and output is visible here.
2025-05-04 00:27:23.682962 | orchestrator |
2025-05-04 00:27:23.683931 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-04 00:27:23.683961 | orchestrator |
2025-05-04 00:27:23.683977 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-04 00:27:23.683999 | orchestrator | Sunday 04 May 2025 00:27:23 +0000 (0:00:00.074) 0:00:00.074 ************
2025-05-04 00:27:24.828995 | orchestrator | ok: [testbed-manager]
2025-05-04 00:27:25.890582 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:25.890967 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:25.891010 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:25.895364 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:25.900009 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:25.900065 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:25.900093 | orchestrator |
2025-05-04 00:27:25.900120 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-04 00:27:26.980857 | orchestrator | Sunday 04 May 2025 00:27:25 +0000 (0:00:02.212) 0:00:02.286 ************
2025-05-04 00:27:26.980991 | orchestrator | ok: [testbed-manager]
2025-05-04 00:27:27.845222 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:27:27.848495 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:27.849179 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:27.849222 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:27.849239 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:27:27.849255 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:27:27.849279 | orchestrator |
2025-05-04 00:27:27.850213 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-04 00:27:27.850684 | orchestrator |
2025-05-04 00:27:27.851206 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-04 00:27:27.851806 | orchestrator | Sunday 04 May 2025 00:27:27 +0000 (0:00:01.956) 0:00:04.242 ************
2025-05-04 00:27:27.937298 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:27.937782 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:27.937831 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:27.937865 | orchestrator |
2025-05-04 00:27:27.937890 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-04 00:27:27.937925 | orchestrator | Sunday 04 May 2025 00:27:27 +0000 (0:00:00.091) 0:00:04.334 ************
2025-05-04 00:27:28.046963 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:28.048587 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:28.151007 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:28.151136 | orchestrator |
2025-05-04 00:27:28.151158 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-04 00:27:28.151174 | orchestrator | Sunday 04 May 2025 00:27:28 +0000 (0:00:00.111) 0:00:04.446 ************
2025-05-04 00:27:28.151205 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:28.151579 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:28.151999 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:28.152732 | orchestrator |
2025-05-04 00:27:28.153168 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-04 00:27:28.153733 | orchestrator | Sunday 04 May 2025 00:27:28 +0000 (0:00:00.103) 0:00:04.549 ************
2025-05-04 00:27:28.268591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:27:28.269132 | orchestrator |
2025-05-04 00:27:28.269342 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-04 00:27:28.269770 | orchestrator | Sunday 04 May 2025 00:27:28 +0000 (0:00:00.117) 0:00:04.667 ************
2025-05-04 00:27:28.704174 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:28.704690 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:28.705491 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:28.706511 | orchestrator |
2025-05-04 00:27:28.707361 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-04 00:27:28.709548 | orchestrator | Sunday 04 May 2025 00:27:28 +0000 (0:00:00.434) 0:00:05.101 ************
2025-05-04 00:27:28.788212 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:28.788659 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:28.789412 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:28.790156 | orchestrator |
2025-05-04 00:27:28.792468 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-04 00:27:28.793158 | orchestrator | Sunday 04 May 2025 00:27:28 +0000 (0:00:00.085) 0:00:05.186 ************
2025-05-04 00:27:29.764004 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:29.764199 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:29.764972 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:29.765728 | orchestrator |
2025-05-04 00:27:29.766408 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-04 00:27:29.767202 | orchestrator | Sunday 04 May 2025 00:27:29 +0000 (0:00:00.973) 0:00:06.160 ************
2025-05-04 00:27:30.216702 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:30.217672 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:30.217722 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:30.218116 | orchestrator |
2025-05-04 00:27:30.218155 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-04 00:27:30.218551 | orchestrator | Sunday 04 May 2025 00:27:30 +0000 (0:00:00.453) 0:00:06.614 ************
2025-05-04 00:27:31.265155 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:31.265485 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:31.266387 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:31.267412 | orchestrator |
2025-05-04 00:27:31.272603 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-04 00:27:44.678355 | orchestrator | Sunday 04 May 2025 00:27:31 +0000 (0:00:01.046) 0:00:07.660 ************
2025-05-04 00:27:44.678582 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:44.679666 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:44.679694 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:44.679715 | orchestrator |
2025-05-04 00:27:44.680418 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-04 00:27:44.680944 | orchestrator | Sunday 04 May 2025 00:27:44 +0000 (0:00:13.409) 0:00:21.070 ************
2025-05-04 00:27:44.734337 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:27:44.787300 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:27:44.788239 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:27:44.789420 | orchestrator |
2025-05-04 00:27:44.792516 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-04 00:27:44.793605 | orchestrator | Sunday 04 May 2025 00:27:44 +0000 (0:00:00.113) 0:00:21.184 ************
2025-05-04 00:27:52.225403 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:27:52.225773 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:27:52.226289 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:27:52.226780 | orchestrator |
2025-05-04 00:27:52.229645 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-04 00:27:52.230064 | orchestrator | Sunday 04 May 2025 00:27:52 +0000 (0:00:07.437) 0:00:28.622 ************
2025-05-04 00:27:52.667862 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:52.668074 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:52.668716 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:52.670351 | orchestrator |
2025-05-04 00:27:52.671345 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-04 00:27:52.671981 | orchestrator | Sunday 04 May 2025 00:27:52 +0000 (0:00:00.443) 0:00:29.065 ************
2025-05-04 00:27:56.212939 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-04 00:27:56.213343 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-04 00:27:56.214305 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-04 00:27:56.216650 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-04 00:27:56.217523 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-04 00:27:56.217913 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-04 00:27:56.218911 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-04 00:27:56.219906 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-04 00:27:56.220390 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-04 00:27:56.221511 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-04 00:27:56.221997 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-04 00:27:56.222983 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-04 00:27:56.223528 | orchestrator |
2025-05-04 00:27:56.224171 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-04 00:27:56.224851 | orchestrator | Sunday 04 May 2025 00:27:56 +0000 (0:00:03.543) 0:00:32.609 ************
2025-05-04 00:27:57.411156 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:27:57.411963 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:27:57.415412 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:27:57.416870 | orchestrator |
2025-05-04 00:27:57.416902 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-04 00:27:57.416955 | orchestrator |
2025-05-04 00:27:57.416978 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:27:57.417038 | orchestrator | Sunday 04 May 2025 00:27:57 +0000 (0:00:01.197) 0:00:33.806 ************
2025-05-04 00:27:59.124217 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:03.361619 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:03.362098 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:03.363356 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:03.364170 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:03.364720 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:03.365819 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:03.366489 | orchestrator |
2025-05-04 00:28:03.367386 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:28:03.367810 | orchestrator | 2025-05-04 00:28:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:28:03.368013 | orchestrator | 2025-05-04 00:28:03 | INFO  | Please wait and do not abort execution.
2025-05-04 00:28:03.368970 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:28:03.369880 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:28:03.370419 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:28:03.371140 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:28:03.371425 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:28:03.371927 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:28:03.372253 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:28:03.372634 | orchestrator |
2025-05-04 00:28:03.372949 | orchestrator | Sunday 04 May 2025 00:28:03 +0000 (0:00:05.952) 0:00:39.759 ************
2025-05-04 00:28:03.373856 | orchestrator | ===============================================================================
2025-05-04 00:28:03.375627 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.41s
2025-05-04 00:28:03.376011 | orchestrator | Install required packages (Debian) -------------------------------------- 7.44s
2025-05-04 00:28:03.376305 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.95s
2025-05-04 00:28:03.376520 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s
2025-05-04 00:28:03.376836 | orchestrator | Create custom facts directory ------------------------------------------- 2.21s
2025-05-04 00:28:03.377242 | orchestrator | Copy fact file ---------------------------------------------------------- 1.96s
2025-05-04 00:28:03.377494 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s
2025-05-04 00:28:03.377815 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-05-04 00:28:03.378165 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2025-05-04 00:28:03.378423 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-05-04 00:28:03.378661 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-05-04 00:28:03.378965 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-05-04 00:28:03.379275 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-05-04 00:28:03.379543 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-05-04 00:28:03.379777 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.11s
2025-05-04 00:28:03.380108 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.10s
2025-05-04 00:28:03.380357 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2025-05-04 00:28:03.380588 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-05-04 00:28:03.750619 | orchestrator | + osism apply bootstrap
2025-05-04 00:28:05.207648 | orchestrator | 2025-05-04 00:28:05 | INFO  | Task b8a9415e-520c-4bec-8a81-4cd632b2697f (bootstrap) was prepared for execution.
2025-05-04 00:28:08.324988 | orchestrator | 2025-05-04 00:28:05 | INFO  | It takes a moment until task b8a9415e-520c-4bec-8a81-4cd632b2697f (bootstrap) has been started and output is visible here.
2025-05-04 00:28:08.325148 | orchestrator |
2025-05-04 00:28:08.327995 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-04 00:28:08.329883 | orchestrator |
2025-05-04 00:28:08.330138 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-04 00:28:08.330174 | orchestrator | Sunday 04 May 2025 00:28:08 +0000 (0:00:00.106) 0:00:00.106 ************
2025-05-04 00:28:08.397603 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:08.422640 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:08.451842 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:08.474491 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:08.546782 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:08.547037 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:08.548112 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:08.551795 | orchestrator |
2025-05-04 00:28:08.552892 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-04 00:28:08.553697 | orchestrator |
2025-05-04 00:28:08.554099 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:28:08.557211 | orchestrator | Sunday 04 May 2025 00:28:08 +0000 (0:00:00.227) 0:00:00.333 ************
2025-05-04 00:28:13.064555 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:13.065259 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:13.065511 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:13.066541 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:13.066963 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:13.067875 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:13.068052 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:13.068771 | orchestrator |
2025-05-04 00:28:13.069510 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-04 00:28:13.070241 | orchestrator |
2025-05-04 00:28:13.071645 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:28:13.072664 | orchestrator | Sunday 04 May 2025 00:28:13 +0000 (0:00:04.517) 0:00:04.850 ************
2025-05-04 00:28:13.154483 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-04 00:28:13.156769 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-04 00:28:13.156840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-04 00:28:13.174554 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-04 00:28:13.216854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:28:13.217043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:28:13.217176 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-04 00:28:13.217205 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-04 00:28:13.219377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:28:13.541596 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-04 00:28:13.542424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-04 00:28:13.542854 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-04 00:28:13.543486 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-04 00:28:13.544393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-04 00:28:13.544640 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:13.545309 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-04 00:28:13.546282 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-04 00:28:13.546937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-04 00:28:13.546999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-04 00:28:13.547243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-04 00:28:13.547854 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-04 00:28:13.548700 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-04 00:28:13.549026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-04 00:28:13.549489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-04 00:28:13.550121 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-04 00:28:13.550506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-04 00:28:13.550847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-04 00:28:13.551395 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-04 00:28:13.552066 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:28:13.552297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:28:13.554514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-04 00:28:13.555028 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-04 00:28:13.555347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-04 00:28:13.555722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-04 00:28:13.556176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-04 00:28:13.556712 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-04 00:28:13.556942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-04 00:28:13.557378 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:28:13.557888 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-04 00:28:13.558306 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-04 00:28:13.558551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-04 00:28:13.558992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-04 00:28:13.559358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-04 00:28:13.559669 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:28:13.560144 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-04 00:28:13.560638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-04 00:28:13.560853 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:28:13.561225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-04 00:28:13.563324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-04 00:28:13.564755 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-04 00:28:13.566124 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-04 00:28:13.566167 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:28:13.566247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-04 00:28:13.566489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-04 00:28:13.568740 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-04 00:28:13.628371 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:28:13.628511 | orchestrator |
2025-05-04 00:28:13.628529 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-04 00:28:13.628544 | orchestrator |
2025-05-04 00:28:13.628558 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-04 00:28:13.628572 | orchestrator | Sunday 04 May 2025 00:28:13 +0000 (0:00:00.477) 0:00:05.327 ************
2025-05-04 00:28:13.628603 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:13.665085 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:13.690284 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:13.715785 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:13.778943 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:13.783575 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:13.783734 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:13.783781 | orchestrator |
2025-05-04 00:28:13.784486 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-04 00:28:13.784915 | orchestrator | Sunday 04 May 2025 00:28:13 +0000 (0:00:00.235) 0:00:05.563 ************
2025-05-04 00:28:14.975223 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:14.976639 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:14.977044 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:14.977083 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:14.982099 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:14.983323 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:14.983378 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:14.983528 | orchestrator |
2025-05-04 00:28:16.129834 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-04 00:28:16.129989 | orchestrator | Sunday 04 May 2025 00:28:14 +0000 (0:00:01.197) 0:00:06.760 ************
2025-05-04 00:28:16.130143 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:16.130295 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:16.131110 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:16.134611 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:16.135084 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:16.135197 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:16.135216 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:16.135247 | orchestrator |
2025-05-04 00:28:16.135617 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-04 00:28:16.136012 | orchestrator | Sunday 04 May 2025 00:28:16 +0000 (0:00:00.293) 0:00:07.914 ************
2025-05-04 00:28:16.422908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:28:16.423900 | orchestrator |
2025-05-04 00:28:16.423992 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-04 00:28:16.424838 | orchestrator | Sunday 04 May 2025 00:28:16 +0000 (0:00:00.293) 0:00:08.207 ************
2025-05-04 00:28:18.709012 | orchestrator | changed: [testbed-manager]
2025-05-04 00:28:18.709305 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:18.710101 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:18.710187 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:18.711038 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:18.711409 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:18.712823 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:18.714394 | orchestrator |
2025-05-04 00:28:18.714903 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-04 00:28:18.715386 | orchestrator | Sunday 04 May 2025 00:28:18 +0000 (0:00:02.284) 0:00:10.491 ************
2025-05-04 00:28:18.774930 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:18.968126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:28:18.968946 | orchestrator |
2025-05-04 00:28:18.969849 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-04 00:28:18.970976 | orchestrator | Sunday 04 May 2025 00:28:18 +0000 (0:00:00.260) 0:00:10.752 ************
2025-05-04 00:28:19.998595 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:19.998773 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:20.002648 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:20.002924 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:20.002944 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:20.002954 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:20.002968 | orchestrator |
2025-05-04 00:28:20.003835 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-04 00:28:20.004339 | orchestrator | Sunday 04 May 2025 00:28:19 +0000 (0:00:01.030) 0:00:11.782 ************
2025-05-04 00:28:20.069206 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:20.569908 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:20.570899 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:20.572122 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:20.572529 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:20.575962 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:20.674902 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:20.675016 | orchestrator |
2025-05-04 00:28:20.675034 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-04 00:28:20.675050 | orchestrator | Sunday 04 May 2025 00:28:20 +0000 (0:00:00.571) 0:00:12.355 ************
2025-05-04 00:28:20.675080 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:28:20.699841 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:28:20.726125 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:28:21.012625 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:28:21.012862 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:28:21.013373 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:28:21.016839 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:21.097398 | orchestrator |
2025-05-04 00:28:21.097550 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-04 00:28:21.097569 | orchestrator | Sunday 04 May 2025 00:28:21 +0000 (0:00:00.441) 0:00:12.796 ************
2025-05-04 00:28:21.097625 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:21.126648 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:28:21.148517 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:28:21.170733 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:28:21.232180 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:28:21.232358 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:28:21.233069 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:28:21.233704 | orchestrator |
2025-05-04 00:28:21.234371 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-04 00:28:21.234873 | orchestrator | Sunday 04 May 2025 00:28:21 +0000 (0:00:00.220) 0:00:13.016 ************
2025-05-04 00:28:21.527311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:28:21.527543 | orchestrator |
2025-05-04 00:28:21.528107 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-04 00:28:21.528565 | orchestrator | Sunday 04 May 2025 00:28:21 +0000 (0:00:00.296) 0:00:13.312 ************
2025-05-04 00:28:21.815689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:28:21.819586 | orchestrator |
2025-05-04 00:28:23.032159 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-04 00:28:23.032298 | orchestrator | Sunday 04 May 2025 00:28:21 +0000 (0:00:00.286) 0:00:13.599 ************
2025-05-04 00:28:23.032334 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:23.035835 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:23.036000 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:23.036030 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:23.036052 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:23.036080 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:23.036640 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:23.037437 | orchestrator |
2025-05-04 00:28:23.038889 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-04 00:28:23.039204 | orchestrator | Sunday 04 May 2025 00:28:23 +0000 (0:00:01.216) 0:00:14.815 ************
2025-05-04 00:28:23.117795 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:23.143889 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:28:23.172739 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:28:23.195833 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:28:23.262992 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:28:23.263516 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:28:23.265168 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:28:23.265528 | orchestrator |
2025-05-04 00:28:23.266569 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-04 00:28:23.267403 | orchestrator | Sunday 04 May 2025 00:28:23 +0000 (0:00:00.232) 0:00:15.048 ************
2025-05-04 00:28:23.806519 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:23.807527 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:23.808396 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:23.809179 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:23.810542 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:23.811246 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:23.812124 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:23.812851 | orchestrator |
2025-05-04 00:28:23.813331 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-04 00:28:23.814159 | orchestrator | Sunday 04 May 2025 00:28:23 +0000 (0:00:00.542) 0:00:15.590 ************
2025-05-04 00:28:23.887308 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:23.913044 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:28:23.936271 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:28:23.964828 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:28:24.034888 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:28:24.036411 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:28:24.037688 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:28:24.038760 | orchestrator |
2025-05-04 00:28:24.039174 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-04 00:28:24.040511 | orchestrator | Sunday 04 May 2025 00:28:24 +0000 (0:00:00.229) 0:00:15.819 ************
2025-05-04 00:28:24.562076 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:24.562760 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:24.563658 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:24.565298 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:24.566278 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:24.567499 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:24.568564 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:24.569302 | orchestrator |
2025-05-04 00:28:24.570330 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-04 00:28:24.572413 | orchestrator | Sunday 04 May 2025 00:28:24 +0000 (0:00:00.526) 0:00:16.346 ************
2025-05-04 00:28:25.667980 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:25.668356 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:25.669980 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:25.670666 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:25.671560 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:25.672652 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:25.673390 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:25.674346 | orchestrator |
2025-05-04 00:28:25.674822 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-04 00:28:25.675469 | orchestrator | Sunday 04 May 2025 00:28:25 +0000 (0:00:01.104) 0:00:17.450 ************
2025-05-04 00:28:26.786585 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:26.787351 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:26.788550 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:26.789353 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:26.790252 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:26.790922 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:26.792061 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:26.793265 | orchestrator |
2025-05-04 00:28:26.794278 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-04 00:28:26.795414 | orchestrator | Sunday 04 May 2025 00:28:26 +0000 (0:00:01.119) 0:00:18.570 ************
2025-05-04 00:28:27.097550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:28:27.098336 | orchestrator |
2025-05-04 00:28:27.098980 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-04 00:28:27.099833 | orchestrator | Sunday 04 May 2025 00:28:27 +0000 (0:00:00.310) 0:00:18.880 ************
2025-05-04 00:28:27.191044 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:28:28.495974 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:28:28.497108 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:28:28.498515 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:28:28.499538 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:28:28.500440 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:28:28.501823 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:28:28.502962 | orchestrator |
2025-05-04 00:28:28.503971 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-04 00:28:28.505183 | orchestrator | Sunday 04 May 2025 00:28:28 +0000 (0:00:01.400) 0:00:20.280 ************
2025-05-04 00:28:28.570487 | orchestrator | ok: [testbed-manager]
2025-05-04 00:28:28.604214 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:28:28.645221 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:28:28.669613 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:28:28.739713 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:28:28.739890 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:28:28.740501 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:28:28.740737 | orchestrator |
2025-05-04 00:28:28.741187 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-04 00:28:28.741680 | orchestrator | Sunday 04 May 2025 00:28:28
+0000 (0:00:00.244) 0:00:20.525 ************ 2025-05-04 00:28:28.841497 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:28.872865 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:28.897131 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:28.984588 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:28.985589 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:28.987361 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:28.989367 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:28.989621 | orchestrator | 2025-05-04 00:28:28.991081 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-04 00:28:28.991555 | orchestrator | Sunday 04 May 2025 00:28:28 +0000 (0:00:00.243) 0:00:20.769 ************ 2025-05-04 00:28:29.073786 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:29.114799 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:29.145316 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:29.172595 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:29.234503 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:29.235063 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:29.236534 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:29.237372 | orchestrator | 2025-05-04 00:28:29.239131 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-04 00:28:29.239594 | orchestrator | Sunday 04 May 2025 00:28:29 +0000 (0:00:00.250) 0:00:21.019 ************ 2025-05-04 00:28:29.533388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:28:29.534417 | orchestrator | 2025-05-04 00:28:29.534489 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-04 00:28:29.540390 | 
orchestrator | Sunday 04 May 2025 00:28:29 +0000 (0:00:00.297) 0:00:21.317 ************ 2025-05-04 00:28:30.099626 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:30.100359 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:30.100398 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:30.100423 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:30.100670 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:30.102244 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:30.103097 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:30.103960 | orchestrator | 2025-05-04 00:28:30.104586 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-04 00:28:30.105262 | orchestrator | Sunday 04 May 2025 00:28:30 +0000 (0:00:00.563) 0:00:21.880 ************ 2025-05-04 00:28:30.193311 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:28:30.217141 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:28:30.250680 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:28:30.276767 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:28:30.361390 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:28:30.361591 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:28:30.366411 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:28:31.437583 | orchestrator | 2025-05-04 00:28:31.437727 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-04 00:28:31.437747 | orchestrator | Sunday 04 May 2025 00:28:30 +0000 (0:00:00.265) 0:00:22.145 ************ 2025-05-04 00:28:31.437779 | orchestrator | changed: [testbed-manager] 2025-05-04 00:28:31.437859 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:31.438727 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:31.439953 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:31.441224 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:28:31.441941 | orchestrator | 
changed: [testbed-node-1] 2025-05-04 00:28:31.442639 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:28:31.443218 | orchestrator | 2025-05-04 00:28:31.444034 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-04 00:28:31.448730 | orchestrator | Sunday 04 May 2025 00:28:31 +0000 (0:00:01.074) 0:00:23.219 ************ 2025-05-04 00:28:32.049644 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:32.051033 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:32.051921 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:32.051957 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:32.053083 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:32.053856 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:32.054887 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:32.055863 | orchestrator | 2025-05-04 00:28:32.056664 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-04 00:28:32.057208 | orchestrator | Sunday 04 May 2025 00:28:32 +0000 (0:00:00.613) 0:00:23.833 ************ 2025-05-04 00:28:33.381236 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:33.381517 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:33.381555 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:33.381677 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:33.381699 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:28:33.381715 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:28:33.381758 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:28:33.382358 | orchestrator | 2025-05-04 00:28:33.382503 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-04 00:28:45.568377 | orchestrator | Sunday 04 May 2025 00:28:33 +0000 (0:00:01.327) 0:00:25.160 ************ 2025-05-04 00:28:45.568600 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:45.568973 | orchestrator | ok: 
[testbed-node-4] 2025-05-04 00:28:45.569005 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:45.569028 | orchestrator | changed: [testbed-manager] 2025-05-04 00:28:45.570293 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:28:45.572159 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:28:45.577531 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:28:45.577998 | orchestrator | 2025-05-04 00:28:45.578076 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-04 00:28:45.578098 | orchestrator | Sunday 04 May 2025 00:28:45 +0000 (0:00:12.187) 0:00:37.348 ************ 2025-05-04 00:28:45.642659 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:45.670353 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:45.697665 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:45.724678 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:45.786905 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:45.788752 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:45.789710 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:45.791363 | orchestrator | 2025-05-04 00:28:45.792280 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-04 00:28:45.792816 | orchestrator | Sunday 04 May 2025 00:28:45 +0000 (0:00:00.222) 0:00:37.571 ************ 2025-05-04 00:28:45.868058 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:45.893942 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:45.919842 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:45.945276 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:46.014067 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:46.014304 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:46.015404 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:46.015965 | orchestrator | 2025-05-04 00:28:46.016581 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-04 00:28:46.017202 | orchestrator | Sunday 04 May 2025 00:28:46 +0000 (0:00:00.228) 0:00:37.799 ************ 2025-05-04 00:28:46.092068 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:46.120801 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:46.155390 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:46.184766 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:46.252046 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:46.252263 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:46.252846 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:46.253651 | orchestrator | 2025-05-04 00:28:46.256142 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-04 00:28:46.256667 | orchestrator | Sunday 04 May 2025 00:28:46 +0000 (0:00:00.238) 0:00:38.038 ************ 2025-05-04 00:28:46.546169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:28:46.546368 | orchestrator | 2025-05-04 00:28:46.547624 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-04 00:28:47.851513 | orchestrator | Sunday 04 May 2025 00:28:46 +0000 (0:00:00.293) 0:00:38.331 ************ 2025-05-04 00:28:47.851665 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:47.853635 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:48.899089 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:48.899260 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:48.899282 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:48.899297 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:48.899312 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:48.899327 | orchestrator | 2025-05-04 00:28:48.899344 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-04 00:28:48.899360 | orchestrator | Sunday 04 May 2025 00:28:47 +0000 (0:00:01.304) 0:00:39.635 ************ 2025-05-04 00:28:48.899391 | orchestrator | changed: [testbed-manager] 2025-05-04 00:28:48.899520 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:28:48.899547 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:28:48.900038 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:28:48.900449 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:28:48.901075 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:28:48.901490 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:28:48.903300 | orchestrator | 2025-05-04 00:28:48.903782 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-04 00:28:48.904334 | orchestrator | Sunday 04 May 2025 00:28:48 +0000 (0:00:01.047) 0:00:40.682 ************ 2025-05-04 00:28:49.720698 | orchestrator | ok: [testbed-manager] 2025-05-04 00:28:49.721338 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:28:49.721384 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:28:49.722413 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:28:49.723036 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:28:49.723622 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:28:49.724165 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:28:49.724853 | orchestrator | 2025-05-04 00:28:49.725324 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-04 00:28:49.725832 | orchestrator | Sunday 04 May 2025 00:28:49 +0000 (0:00:00.822) 0:00:41.504 ************ 2025-05-04 00:28:50.060294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 
00:28:50.060773 | orchestrator | 2025-05-04 00:28:50.061514 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-04 00:28:50.062600 | orchestrator | Sunday 04 May 2025 00:28:50 +0000 (0:00:00.339) 0:00:41.844 ************ 2025-05-04 00:28:51.050388 | orchestrator | changed: [testbed-manager] 2025-05-04 00:28:51.050909 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:28:51.051943 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:28:51.058430 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:28:51.060719 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:28:51.060770 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:28:51.062106 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:28:51.062710 | orchestrator | 2025-05-04 00:28:51.063268 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-04 00:28:51.063758 | orchestrator | Sunday 04 May 2025 00:28:51 +0000 (0:00:00.987) 0:00:42.831 ************ 2025-05-04 00:28:51.136007 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:28:51.167659 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:28:51.197687 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:28:51.227890 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:28:51.374911 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:28:51.377649 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:28:51.377818 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:28:51.377861 | orchestrator | 2025-05-04 00:28:51.377888 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-04 00:28:51.377954 | orchestrator | Sunday 04 May 2025 00:28:51 +0000 (0:00:00.325) 0:00:43.156 ************ 2025-05-04 00:29:02.548231 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:29:02.548932 | orchestrator | changed: [testbed-node-2] 2025-05-04 
00:29:02.548993 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:29:02.549021 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:29:02.551056 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:29:02.554585 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:29:02.554667 | orchestrator | changed: [testbed-manager] 2025-05-04 00:29:02.555236 | orchestrator | 2025-05-04 00:29:02.556288 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-04 00:29:02.556806 | orchestrator | Sunday 04 May 2025 00:29:02 +0000 (0:00:11.170) 0:00:54.326 ************ 2025-05-04 00:29:03.669021 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:03.670148 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:03.670672 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:03.672557 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:03.674935 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:03.675443 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:03.676897 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:03.677041 | orchestrator | 2025-05-04 00:29:03.677943 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-04 00:29:03.678619 | orchestrator | Sunday 04 May 2025 00:29:03 +0000 (0:00:01.126) 0:00:55.453 ************ 2025-05-04 00:29:05.402194 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:05.402498 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:05.403336 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:05.408067 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:05.409352 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:05.409426 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:05.409451 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:05.409505 | orchestrator | 2025-05-04 00:29:05.409530 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-04 00:29:05.409563 | orchestrator | Sunday 04 May 2025 00:29:05 +0000 (0:00:01.731) 0:00:57.185 ************ 2025-05-04 00:29:05.487059 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:05.519087 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:05.553798 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:05.584023 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:05.668629 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:05.668920 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:05.668955 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:05.668976 | orchestrator | 2025-05-04 00:29:05.669286 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-04 00:29:05.760889 | orchestrator | Sunday 04 May 2025 00:29:05 +0000 (0:00:00.268) 0:00:57.453 ************ 2025-05-04 00:29:05.760995 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:05.785628 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:05.814296 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:05.854832 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:05.928814 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:05.929417 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:05.929459 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:05.930312 | orchestrator | 2025-05-04 00:29:05.930716 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-04 00:29:05.931258 | orchestrator | Sunday 04 May 2025 00:29:05 +0000 (0:00:00.260) 0:00:57.713 ************ 2025-05-04 00:29:06.246363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:29:06.246662 | orchestrator | 2025-05-04 00:29:06.248269 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-05-04 00:29:07.780464 | orchestrator | Sunday 04 May 2025 00:29:06 +0000 (0:00:00.316) 0:00:58.030 ************ 2025-05-04 00:29:07.780672 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:07.780869 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:07.782810 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:07.783455 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:07.784279 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:07.784765 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:07.785256 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:07.785933 | orchestrator | 2025-05-04 00:29:07.786429 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-04 00:29:07.786878 | orchestrator | Sunday 04 May 2025 00:29:07 +0000 (0:00:01.529) 0:00:59.560 ************ 2025-05-04 00:29:08.401684 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:29:08.402082 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:29:08.402633 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:29:08.403309 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:29:08.404010 | orchestrator | changed: [testbed-manager] 2025-05-04 00:29:08.404600 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:29:08.405863 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:29:08.406307 | orchestrator | 2025-05-04 00:29:08.406342 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-04 00:29:08.407003 | orchestrator | Sunday 04 May 2025 00:29:08 +0000 (0:00:00.623) 0:01:00.184 ************ 2025-05-04 00:29:08.499270 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:08.530977 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:08.559309 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:08.589693 | orchestrator | ok: [testbed-node-5] 2025-05-04 
00:29:08.651531 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:08.651802 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:08.651860 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:08.651939 | orchestrator | 2025-05-04 00:29:08.651959 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-04 00:29:08.651981 | orchestrator | Sunday 04 May 2025 00:29:08 +0000 (0:00:00.252) 0:01:00.436 ************ 2025-05-04 00:29:09.756973 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:09.757266 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:09.760542 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:09.760687 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:09.760711 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:09.760727 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:09.760748 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:09.761288 | orchestrator | 2025-05-04 00:29:09.761796 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-04 00:29:09.762361 | orchestrator | Sunday 04 May 2025 00:29:09 +0000 (0:00:01.103) 0:01:01.540 ************ 2025-05-04 00:29:11.583849 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:29:11.584222 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:29:11.584263 | orchestrator | changed: [testbed-manager] 2025-05-04 00:29:11.584602 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:29:11.584638 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:29:11.585621 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:29:11.588955 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:29:11.589948 | orchestrator | 2025-05-04 00:29:11.589973 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-04 00:29:11.589992 | orchestrator | Sunday 04 May 2025 00:29:11 +0000 (0:00:01.827) 0:01:03.368 ************ 2025-05-04 
00:29:13.772220 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:13.772610 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:13.773880 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:13.774513 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:13.775445 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:13.777267 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:13.778272 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:13.778839 | orchestrator | 2025-05-04 00:29:13.779433 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-04 00:29:13.779896 | orchestrator | Sunday 04 May 2025 00:29:13 +0000 (0:00:02.185) 0:01:05.553 ************ 2025-05-04 00:29:48.757389 | orchestrator | ok: [testbed-manager] 2025-05-04 00:29:48.758358 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:29:48.758389 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:29:48.758401 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:29:48.758412 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:29:48.758423 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:29:48.758435 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:29:48.758446 | orchestrator | 2025-05-04 00:29:48.758458 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-04 00:29:48.758477 | orchestrator | Sunday 04 May 2025 00:29:48 +0000 (0:00:34.980) 0:01:40.534 ************ 2025-05-04 00:31:09.787802 | orchestrator | changed: [testbed-manager] 2025-05-04 00:31:09.789278 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:31:09.789343 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:31:09.789371 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:31:09.790132 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:31:09.790828 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:31:09.792123 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:31:09.792783 | 
orchestrator | 2025-05-04 00:31:09.796375 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-04 00:31:09.798418 | orchestrator | Sunday 04 May 2025 00:31:09 +0000 (0:01:21.030) 0:03:01.564 ************ 2025-05-04 00:31:11.373166 | orchestrator | ok: [testbed-manager] 2025-05-04 00:31:11.376627 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:31:11.376712 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:31:11.376776 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:31:11.376797 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:31:11.377569 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:31:11.378253 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:31:11.379260 | orchestrator | 2025-05-04 00:31:11.379789 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-04 00:31:11.380648 | orchestrator | Sunday 04 May 2025 00:31:11 +0000 (0:00:01.591) 0:03:03.156 ************ 2025-05-04 00:31:24.104122 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:31:24.104326 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:31:24.104352 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:31:24.104367 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:31:24.104381 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:31:24.104402 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:31:24.104550 | orchestrator | changed: [testbed-manager] 2025-05-04 00:31:24.104870 | orchestrator | 2025-05-04 00:31:24.105084 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-04 00:31:24.105757 | orchestrator | Sunday 04 May 2025 00:31:24 +0000 (0:00:12.725) 0:03:15.882 ************ 2025-05-04 00:31:24.481470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-04 00:31:24.481755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-04 00:31:24.485823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-04 00:31:24.485944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-04 00:31:24.485964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-05-04 00:31:24.485979 | orchestrator | 2025-05-04 00:31:24.485998 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-04 00:31:24.486827 | orchestrator | Sunday 04 May 2025 00:31:24 +0000 (0:00:00.383) 0:03:16.265 ************ 2025-05-04 00:31:24.542119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-04 00:31:24.570550 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:31:24.571208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-04 00:31:24.571246 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-04 00:31:24.617199 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:31:24.654294 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-04 00:31:24.654442 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:31:24.687821 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:31:25.184394 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:31:25.185357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:31:25.185621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:31:25.186840 | orchestrator | 2025-05-04 00:31:25.188680 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-04 00:31:25.189575 | orchestrator | Sunday 04 May 2025 00:31:25 +0000 (0:00:00.699) 0:03:16.965 ************ 2025-05-04 00:31:25.246513 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-04 00:31:25.246797 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:25.247348 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:25.247909 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:25.248639 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:25.249208 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:25.289964 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:25.291229 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:25.291734 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:25.295346 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:25.295901 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:25.296193 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:25.296443 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:25.296579 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:25.296792 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:25.297033 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:25.298874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:25.359243 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:25.359379 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:25.359413 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:25.359515 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:25.359569 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:25.359588 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:25.359839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:25.359872 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:25.360302 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:25.363054 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:25.366574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:25.366921 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:25.367591 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:25.368262 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:25.368705 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:25.369226 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:25.369677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:25.370231 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:25.370674 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:25.371276 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:25.399840 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:31:25.399978 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:25.400435 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:25.401175 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:25.401809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:25.436425 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:31:29.909305 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:31:29.910141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:29.910207 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:29.913917 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-04 00:31:29.914715 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:29.914827 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:29.916345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-04 00:31:29.916666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:29.916711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:29.919034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-04 00:31:29.919212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:29.919237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:29.919251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-04 00:31:29.919270 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:29.919497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:29.919936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:29.920582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:29.920937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:29.921249 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:29.921862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:29.922715 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:29.922837 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-04 00:31:29.923256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:29.923676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:29.924932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-04 00:31:29.925030 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:29.925057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:29.925572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-04 00:31:29.925603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-04 00:31:29.926078 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-04 00:31:29.926358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-04 00:31:29.926683 | orchestrator |
2025-05-04 00:31:29.927722 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-04 00:31:29.927838 | orchestrator | Sunday 04 May 2025 00:31:29 +0000 (0:00:04.725) 0:03:21.691 ************
2025-05-04 00:31:30.550166 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.550623 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.552766 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.554151 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.555030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.555864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.555938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-04 00:31:30.556228 | orchestrator |
2025-05-04 00:31:30.557082 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-04 00:31:30.557461 | orchestrator | Sunday 04 May 2025 00:31:30 +0000 (0:00:00.643) 0:03:22.334 ************
2025-05-04 00:31:30.617577 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:30.643657 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:30.721103 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.047153 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:31:31.047601 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.049205 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:31:31.049783 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.050270 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:31:31.050827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.051455 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.052058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-04 00:31:31.052897 | orchestrator |
2025-05-04 00:31:31.053293 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-04 00:31:31.053808 | orchestrator | Sunday 04 May 2025 00:31:31 +0000 (0:00:00.495) 0:03:22.830 ************
2025-05-04 00:31:31.105373 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:31.130681 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:31.208979 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.583491 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.583738 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:31:32.586735 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:31:32.586827 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.586864 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:31:32.587694 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.588367 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.589278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-04 00:31:32.589936 | orchestrator |
2025-05-04 00:31:32.591084 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-04 00:31:32.591663 | orchestrator | Sunday 04 May 2025 00:31:32 +0000 (0:00:00.325) 0:03:24.365 ************
2025-05-04 00:31:32.676741 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:32.707286 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:31:32.737326 | orchestrator 
| skipping: [testbed-node-4]
2025-05-04 00:31:32.768308 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:31:32.905940 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:31:32.906777 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:31:32.907664 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:31:32.910721 | orchestrator |
2025-05-04 00:31:32.914129 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-04 00:31:38.598705 | orchestrator | Sunday 04 May 2025 00:31:32 +0000 (0:00:00.325) 0:03:24.691 ************
2025-05-04 00:31:38.598884 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:38.598980 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:31:38.599003 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:31:38.599715 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:31:38.600512 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:31:38.601507 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:31:38.602264 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:31:38.602304 | orchestrator |
2025-05-04 00:31:38.602962 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-04 00:31:38.603367 | orchestrator | Sunday 04 May 2025 00:31:38 +0000 (0:00:05.691) 0:03:30.382 ************
2025-05-04 00:31:38.699069 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-04 00:31:38.738648 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-04 00:31:38.738796 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:38.783163 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-04 00:31:38.822991 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:31:38.823125 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-04 00:31:38.823618 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:31:38.866899 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
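The sysctl tasks above loop over parameter groups of the form `{'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}` and apply each group only on the hosts it targets, which is why most hosts log per-item `skipping` entries. As a rough illustration only (this is not the `osism.commons.sysctl` role's actual code; `render_sysctl` and the `wanted` parameter are hypothetical), the same data shape can be rendered into `sysctl.conf`-style lines:

```python
# Hypothetical sketch: turn grouped sysctl parameters, as seen in the loop
# items above, into sysctl.conf-style "name=value" lines for one host.
# Group data copied from the log; the helper itself is illustrative only.
sysctl_groups = [
    {"key": "elasticsearch", "value": [{"name": "vm.max_map_count", "value": 262144}]},
    {"key": "generic", "value": [{"name": "vm.swappiness", "value": 1}]},
    {"key": "compute", "value": [{"name": "net.netfilter.nf_conntrack_max", "value": 1048576}]},
]

def render_sysctl(groups, wanted):
    """Return sysctl.conf lines for the groups that apply to this host."""
    lines = []
    for group in groups:
        if group["key"] not in wanted:
            continue  # mirrors the per-group "skipping" entries in the log
        for param in group["value"]:
            lines.append(f"{param['name']}={param['value']}")
    return lines

print(render_sysctl(sysctl_groups, {"generic", "compute"}))
# → ['vm.swappiness=1', 'net.netfilter.nf_conntrack_max=1048576']
```

A compute node, for example, picks up `vm.swappiness` and `net.netfilter.nf_conntrack_max` but not the elasticsearch group, matching the skip/changed pattern in the log above.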
2025-05-04 00:31:38.867791 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:31:38.868283 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-04 00:31:38.944593 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:31:38.945210 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:31:38.946128 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-04 00:31:38.946952 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:31:38.949420 | orchestrator |
2025-05-04 00:31:38.950642 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-04 00:31:38.951396 | orchestrator | Sunday 04 May 2025 00:31:38 +0000 (0:00:00.347) 0:03:30.730 ************
2025-05-04 00:31:39.996976 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-04 00:31:39.997742 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-04 00:31:39.997791 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-04 00:31:39.999335 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-04 00:31:39.999910 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-04 00:31:39.999945 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-04 00:31:40.001200 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-04 00:31:40.001550 | orchestrator |
2025-05-04 00:31:40.002561 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-04 00:31:40.003118 | orchestrator | Sunday 04 May 2025 00:31:39 +0000 (0:00:01.048) 0:03:31.779 ************
2025-05-04 00:31:40.537627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:31:40.538328 | orchestrator |
2025-05-04 00:31:40.539546 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] *************************
2025-05-04 00:31:40.540859 | orchestrator | Sunday 04 May 2025 00:31:40 +0000 (0:00:00.541) 0:03:32.321 ************
2025-05-04 00:31:41.676133 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:41.676382 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:31:41.677271 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:31:41.677305 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:31:41.677327 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:31:41.678596 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:31:41.678637 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:31:41.679057 | orchestrator |
2025-05-04 00:31:41.679557 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-04 00:31:41.680352 | orchestrator | Sunday 04 May 2025 00:31:41 +0000 (0:00:01.138) 0:03:33.460 ************
2025-05-04 00:31:42.305971 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:42.306230 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:31:42.307286 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:31:42.307707 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:31:42.307742 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:31:42.308201 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:31:42.308634 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:31:42.310151 | orchestrator |
2025-05-04 00:31:42.310705 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-04 00:31:42.311209 | orchestrator | Sunday 04 May 2025 00:31:42 +0000 (0:00:00.627) 0:03:34.087 ************
2025-05-04 00:31:42.923028 | orchestrator | changed: [testbed-manager]
2025-05-04 00:31:42.923786 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:31:42.925052 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:31:42.926077 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:31:42.927628 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:31:42.928578 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:31:42.929369 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:31:42.932472 | orchestrator |
2025-05-04 00:31:42.933235 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-04 00:31:42.933972 | orchestrator | Sunday 04 May 2025 00:31:42 +0000 (0:00:00.619) 0:03:34.707 ************
2025-05-04 00:31:43.484659 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:43.485941 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:31:43.486755 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:31:43.488166 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:31:43.488812 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:31:43.489881 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:31:43.491111 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:31:43.491845 | orchestrator |
2025-05-04 00:31:43.492868 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-04 00:31:43.493472 | orchestrator | Sunday 04 May 2025 00:31:43 +0000 (0:00:00.560) 0:03:35.268 ************
2025-05-04 00:31:44.447285 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317060.6170938, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.447888 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317066.8290668, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.448117 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317064.8213396, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.450148 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317078.9499478, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.451362 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317072.833019, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.451403 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317067.0813951, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.453141 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746317066.1889274, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.454380 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746317083.0290937, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.455353 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316997.7828877, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.457203 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316995.6974473, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.458350 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316992.0562036, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.459748 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316996.5101852, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.460635 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316993.8335323, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.461182 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746316993.0367222, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-04 00:31:44.461721 | orchestrator |
2025-05-04 00:31:44.462312 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-04 00:31:44.462849 | orchestrator | Sunday 04 May 2025 00:31:44 +0000 (0:00:00.960) 0:03:36.228 ************
2025-05-04 00:31:45.519235 | orchestrator | changed: [testbed-manager]
2025-05-04 00:31:45.522250 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:31:45.522308 | orchestrator | changed: [testbed-node-3]
2025-05-04 
00:31:45.522811 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:31:45.523647 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:31:45.524178 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:31:45.524946 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:31:45.525651 | orchestrator |
2025-05-04 00:31:45.526753 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-04 00:31:45.526927 | orchestrator | Sunday 04 May 2025 00:31:45 +0000 (0:00:01.073) 0:03:37.302 ************
2025-05-04 00:31:46.658696 | orchestrator | changed: [testbed-manager]
2025-05-04 00:31:46.658889 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:31:46.660269 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:31:46.660731 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:31:46.661120 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:31:46.661620 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:31:46.662063 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:31:46.662422 | orchestrator |
2025-05-04 00:31:46.662858 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-04 00:31:46.663192 | orchestrator | Sunday 04 May 2025 00:31:46 +0000 (0:00:01.139) 0:03:38.441 ************
2025-05-04 00:31:46.739821 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:31:46.771643 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:31:46.804231 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:31:46.835822 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:31:46.868801 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:31:46.927911 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:31:46.929045 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:31:46.932706 | orchestrator |
2025-05-04 00:31:46.933582 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-04 00:31:46.934816 | orchestrator | Sunday 04 May 2025 00:31:46 +0000 (0:00:00.272) 0:03:38.713 ************
2025-05-04 00:31:47.727492 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:47.727755 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:31:47.727792 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:31:47.727845 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:31:47.728349 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:31:47.728964 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:31:47.730112 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:31:47.731395 | orchestrator |
2025-05-04 00:31:47.731859 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-04 00:31:47.733864 | orchestrator | Sunday 04 May 2025 00:31:47 +0000 (0:00:00.796) 0:03:39.510 ************
2025-05-04 00:31:48.141045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:31:48.143337 | orchestrator |
2025-05-04 00:31:48.143461 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-04 00:31:55.604991 | orchestrator | Sunday 04 May 2025 00:31:48 +0000 (0:00:00.414) 0:03:39.924 ************
2025-05-04 00:31:55.605262 | orchestrator | ok: [testbed-manager]
2025-05-04 00:31:55.605353 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:31:55.605378 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:31:55.608824 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:31:55.609506 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:31:55.610575 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:31:55.611251 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:31:55.612162 | orchestrator |
2025-05-04 00:31:55.612778 | orchestrator | 
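Result entries in this log follow a regular `status: [host]` shape (`ok`, `changed`, or `skipping`). A small sketch, assuming exactly that formatting, can tally results per host; `tally` is a hypothetical helper, not part of Zuul, Ansible, or OSISM:

```python
import re
from collections import Counter

# Hypothetical helper: count task results per host from log lines of the
# form "ok: [host]", "changed: [host]" or "skipping: [host]".
RESULT_RE = re.compile(r"\b(ok|changed|skipping):\s+\[([\w-]+)\]")

def tally(lines):
    counts = Counter()
    for line in lines:
        for status, host in RESULT_RE.findall(line):
            counts[(host, status)] += 1
    return counts

log = [
    "changed: [testbed-node-3]",
    "ok: [testbed-manager]",
    "changed: [testbed-node-3]",
]
print(tally(log)[("testbed-node-3", "changed")])  # → 2
```

Applied to the section above, a tally like this makes it easy to see that the manager skips the compute- and k3s-specific groups while the worker nodes accumulate `changed` results.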
TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-04 00:31:55.614109 | orchestrator | Sunday 04 May 2025 00:31:55 +0000 (0:00:07.463) 0:03:47.388 ************ 2025-05-04 00:31:56.834227 | orchestrator | ok: [testbed-manager] 2025-05-04 00:31:56.835377 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:31:56.836041 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:31:56.836076 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:31:56.838678 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:31:56.839346 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:31:56.842618 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:31:56.842726 | orchestrator | 2025-05-04 00:31:56.842753 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-04 00:31:56.844619 | orchestrator | Sunday 04 May 2025 00:31:56 +0000 (0:00:01.227) 0:03:48.615 ************ 2025-05-04 00:31:58.779731 | orchestrator | ok: [testbed-manager] 2025-05-04 00:31:58.781190 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:31:58.782239 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:31:58.785837 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:31:58.786384 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:31:58.789667 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:31:58.789959 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:31:58.789973 | orchestrator | 2025-05-04 00:31:58.789980 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-04 00:31:58.789990 | orchestrator | Sunday 04 May 2025 00:31:58 +0000 (0:00:01.946) 0:03:50.561 ************ 2025-05-04 00:31:59.166088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:31:59.166754 | 
orchestrator | 2025-05-04 00:31:59.167005 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-04 00:31:59.168075 | orchestrator | Sunday 04 May 2025 00:31:59 +0000 (0:00:00.387) 0:03:50.949 ************ 2025-05-04 00:32:07.348219 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:32:07.348474 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:32:07.350189 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:32:07.353000 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:32:07.357153 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:32:07.357367 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:32:07.358881 | orchestrator | changed: [testbed-manager] 2025-05-04 00:32:07.362261 | orchestrator | 2025-05-04 00:32:07.362425 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-04 00:32:07.364294 | orchestrator | Sunday 04 May 2025 00:32:07 +0000 (0:00:08.180) 0:03:59.130 ************ 2025-05-04 00:32:07.928687 | orchestrator | changed: [testbed-manager] 2025-05-04 00:32:07.929450 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:32:07.930369 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:32:07.930737 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:32:07.931662 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:32:07.932068 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:32:07.932673 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:32:07.933048 | orchestrator | 2025-05-04 00:32:07.933658 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-04 00:32:07.933999 | orchestrator | Sunday 04 May 2025 00:32:07 +0000 (0:00:00.580) 0:03:59.710 ************ 2025-05-04 00:32:09.050098 | orchestrator | changed: [testbed-manager] 2025-05-04 00:32:09.050976 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:32:09.051656 | orchestrator | 
changed: [testbed-node-4] 2025-05-04 00:32:09.052621 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:32:09.053254 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:32:09.054389 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:32:09.055484 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:32:09.059078 | orchestrator | 2025-05-04 00:32:09.060063 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-04 00:32:09.060125 | orchestrator | Sunday 04 May 2025 00:32:09 +0000 (0:00:01.124) 0:04:00.834 ************ 2025-05-04 00:32:10.138085 | orchestrator | changed: [testbed-manager] 2025-05-04 00:32:10.140834 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:32:10.140943 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:32:10.140965 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:32:10.140981 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:32:10.140997 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:32:10.141017 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:32:10.142456 | orchestrator | 2025-05-04 00:32:10.142574 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-04 00:32:10.143140 | orchestrator | Sunday 04 May 2025 00:32:10 +0000 (0:00:01.082) 0:04:01.917 ************ 2025-05-04 00:32:10.232972 | orchestrator | ok: [testbed-manager] 2025-05-04 00:32:10.332319 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:32:10.372766 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:32:10.412075 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:32:10.478170 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:32:10.478289 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:32:10.478912 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:32:10.480137 | orchestrator | 2025-05-04 00:32:10.480489 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default 
value] *** 2025-05-04 00:32:10.480981 | orchestrator | Sunday 04 May 2025 00:32:10 +0000 (0:00:00.345) 0:04:02.262 ************ 2025-05-04 00:32:10.568343 | orchestrator | ok: [testbed-manager] 2025-05-04 00:32:10.648093 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:32:10.682882 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:32:10.767090 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:32:10.857211 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:32:10.857423 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:32:10.858118 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:32:10.858983 | orchestrator | 2025-05-04 00:32:10.859429 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-04 00:32:10.859989 | orchestrator | Sunday 04 May 2025 00:32:10 +0000 (0:00:00.376) 0:04:02.639 ************ 2025-05-04 00:32:10.955471 | orchestrator | ok: [testbed-manager] 2025-05-04 00:32:11.002554 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:32:11.058408 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:32:11.100938 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:32:11.145218 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:32:11.215411 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:32:11.215869 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:32:11.216404 | orchestrator | 2025-05-04 00:32:11.217135 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-04 00:32:11.217752 | orchestrator | Sunday 04 May 2025 00:32:11 +0000 (0:00:00.361) 0:04:03.001 ************ 2025-05-04 00:32:17.100852 | orchestrator | ok: [testbed-manager] 2025-05-04 00:32:17.101053 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:32:17.101080 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:32:17.101103 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:32:17.103069 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:32:17.103663 | orchestrator | ok: 
[testbed-node-2] 2025-05-04 00:32:17.103696 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:32:17.103720 | orchestrator | 2025-05-04 00:32:17.104054 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-04 00:32:17.104826 | orchestrator | Sunday 04 May 2025 00:32:17 +0000 (0:00:05.882) 0:04:08.883 ************ 2025-05-04 00:32:17.477088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:32:17.477275 | orchestrator | 2025-05-04 00:32:17.480891 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-04 00:32:17.481174 | orchestrator | Sunday 04 May 2025 00:32:17 +0000 (0:00:00.375) 0:04:09.259 ************ 2025-05-04 00:32:17.575073 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.575447 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-04 00:32:17.576484 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.580047 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-04 00:32:17.610888 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:32:17.653408 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:32:17.689524 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.689613 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-04 00:32:17.689631 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.689656 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:32:17.689721 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-04 00:32:17.689947 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2025-05-04 00:32:17.731944 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:32:17.732748 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-04 00:32:17.733023 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.804061 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:32:17.804273 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-04 00:32:17.804702 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:32:17.806115 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-04 00:32:17.810321 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-04 00:32:17.811577 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:32:17.812909 | orchestrator | 2025-05-04 00:32:17.813051 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-04 00:32:17.813341 | orchestrator | Sunday 04 May 2025 00:32:17 +0000 (0:00:00.330) 0:04:09.589 ************ 2025-05-04 00:32:18.280928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:32:18.281305 | orchestrator | 2025-05-04 00:32:18.281886 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-04 00:32:18.282006 | orchestrator | Sunday 04 May 2025 00:32:18 +0000 (0:00:00.476) 0:04:10.066 ************ 2025-05-04 00:32:18.358212 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-04 00:32:18.393972 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-04 00:32:18.394136 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:32:18.394365 | orchestrator | skipping: [testbed-node-4] => 
(item=ModemManager.service)  2025-05-04 00:32:18.428857 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:32:18.483360 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-04 00:32:18.483445 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:32:18.484066 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-04 00:32:18.522566 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:32:18.589943 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:32:18.590273 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-04 00:32:18.590876 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:32:18.591444 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-04 00:32:18.591908 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:32:18.592464 | orchestrator | 2025-05-04 00:32:18.595506 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-04 00:32:18.985518 | orchestrator | Sunday 04 May 2025 00:32:18 +0000 (0:00:00.309) 0:04:10.376 ************ 2025-05-04 00:32:18.985712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:32:18.986370 | orchestrator | 2025-05-04 00:32:18.987203 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-04 00:32:18.990658 | orchestrator | Sunday 04 May 2025 00:32:18 +0000 (0:00:00.394) 0:04:10.770 ************ 2025-05-04 00:32:52.816175 | orchestrator | changed: [testbed-manager] 2025-05-04 00:32:52.816382 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:32:52.816409 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:32:52.816431 | orchestrator | changed: 
[testbed-node-0] 2025-05-04 00:32:52.816923 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:32:52.817608 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:32:52.818267 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:32:52.818940 | orchestrator | 2025-05-04 00:32:52.819461 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-04 00:32:52.819891 | orchestrator | Sunday 04 May 2025 00:32:52 +0000 (0:00:33.824) 0:04:44.595 ************ 2025-05-04 00:33:00.340728 | orchestrator | changed: [testbed-manager] 2025-05-04 00:33:00.340980 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:33:00.341085 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:33:00.342724 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:33:00.343726 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:33:00.344065 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:33:00.344546 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:33:00.345291 | orchestrator | 2025-05-04 00:33:00.345949 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-04 00:33:00.346704 | orchestrator | Sunday 04 May 2025 00:33:00 +0000 (0:00:07.528) 0:04:52.123 ************ 2025-05-04 00:33:07.172031 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:33:07.172275 | orchestrator | changed: [testbed-manager] 2025-05-04 00:33:07.173514 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:33:07.173704 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:33:07.175171 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:33:07.176315 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:33:07.177598 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:33:07.178120 | orchestrator | 2025-05-04 00:33:07.178801 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-04 00:33:07.179606 | orchestrator | 
Sunday 04 May 2025 00:33:07 +0000 (0:00:06.832) 0:04:58.956 ************ 2025-05-04 00:33:08.759478 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:08.759653 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:08.760052 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:08.760620 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:08.761679 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:33:08.763017 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:08.763191 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:08.763817 | orchestrator | 2025-05-04 00:33:08.764572 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-04 00:33:08.765178 | orchestrator | Sunday 04 May 2025 00:33:08 +0000 (0:00:01.585) 0:05:00.542 ************ 2025-05-04 00:33:13.812389 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:33:13.813321 | orchestrator | changed: [testbed-manager] 2025-05-04 00:33:13.814686 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:33:13.815636 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:33:13.817095 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:33:13.818061 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:33:13.818703 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:33:13.819056 | orchestrator | 2025-05-04 00:33:13.819867 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-04 00:33:13.820307 | orchestrator | Sunday 04 May 2025 00:33:13 +0000 (0:00:05.052) 0:05:05.594 ************ 2025-05-04 00:33:14.236125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:33:14.236605 | orchestrator | 2025-05-04 00:33:14.236654 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init 
configuration directory] ******* 2025-05-04 00:33:14.240470 | orchestrator | Sunday 04 May 2025 00:33:14 +0000 (0:00:00.424) 0:05:06.019 ************ 2025-05-04 00:33:14.957209 | orchestrator | changed: [testbed-manager] 2025-05-04 00:33:14.957953 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:33:14.959067 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:33:14.959866 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:33:14.961931 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:33:14.962602 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:33:14.963183 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:33:14.963768 | orchestrator | 2025-05-04 00:33:14.964469 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-04 00:33:14.965201 | orchestrator | Sunday 04 May 2025 00:33:14 +0000 (0:00:00.720) 0:05:06.739 ************ 2025-05-04 00:33:16.528686 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:16.529449 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:16.529956 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:16.531014 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:16.531457 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:33:16.532541 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:16.533149 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:16.533654 | orchestrator | 2025-05-04 00:33:16.534297 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-04 00:33:16.534830 | orchestrator | Sunday 04 May 2025 00:33:16 +0000 (0:00:01.573) 0:05:08.313 ************ 2025-05-04 00:33:17.340693 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:33:17.340916 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:33:17.341535 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:33:17.344074 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:33:17.344387 | orchestrator | 
changed: [testbed-manager] 2025-05-04 00:33:17.344805 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:33:17.345815 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:33:17.346260 | orchestrator | 2025-05-04 00:33:17.346699 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-04 00:33:17.347109 | orchestrator | Sunday 04 May 2025 00:33:17 +0000 (0:00:00.810) 0:05:09.123 ************ 2025-05-04 00:33:17.440160 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:17.470778 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:17.502367 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:17.544903 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:17.612778 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:33:17.614291 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:33:17.614941 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:33:17.616004 | orchestrator | 2025-05-04 00:33:17.618989 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-04 00:33:17.622998 | orchestrator | Sunday 04 May 2025 00:33:17 +0000 (0:00:00.274) 0:05:09.398 ************ 2025-05-04 00:33:17.710441 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:17.741801 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:17.772463 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:17.802230 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:17.974356 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:33:17.976965 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:33:17.978336 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:33:17.979098 | orchestrator | 2025-05-04 00:33:17.980549 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-04 00:33:17.981663 | orchestrator | Sunday 04 May 2025 00:33:17 +0000 (0:00:00.355) 
0:05:09.753 ************ 2025-05-04 00:33:18.098203 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:18.131137 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:18.169865 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:18.207061 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:18.271151 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:18.271661 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:18.274887 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:33:18.275929 | orchestrator | 2025-05-04 00:33:18.275971 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-04 00:33:18.275997 | orchestrator | Sunday 04 May 2025 00:33:18 +0000 (0:00:00.303) 0:05:10.057 ************ 2025-05-04 00:33:18.395763 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:18.443451 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:18.482821 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:18.517150 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:18.572377 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:33:18.573364 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:33:18.574401 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:33:18.575112 | orchestrator | 2025-05-04 00:33:18.576193 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-04 00:33:18.576395 | orchestrator | Sunday 04 May 2025 00:33:18 +0000 (0:00:00.300) 0:05:10.357 ************ 2025-05-04 00:33:18.680852 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:18.709053 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:18.752748 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:18.783425 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:18.847240 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:18.848534 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:18.849446 | orchestrator | ok: 
[testbed-node-2] 2025-05-04 00:33:18.850967 | orchestrator | 2025-05-04 00:33:18.851717 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-04 00:33:18.852545 | orchestrator | Sunday 04 May 2025 00:33:18 +0000 (0:00:00.275) 0:05:10.633 ************ 2025-05-04 00:33:18.933767 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:18.963871 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:18.995999 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:19.030828 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:19.064840 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:33:19.118406 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:33:19.118639 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:33:19.119687 | orchestrator | 2025-05-04 00:33:19.120195 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-04 00:33:19.120960 | orchestrator | Sunday 04 May 2025 00:33:19 +0000 (0:00:00.270) 0:05:10.903 ************ 2025-05-04 00:33:19.197992 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:19.226470 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:19.258884 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:19.288073 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:19.346222 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:33:19.517723 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:33:19.521031 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:33:19.521246 | orchestrator | 2025-05-04 00:33:19.521274 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-04 00:33:19.521298 | orchestrator | Sunday 04 May 2025 00:33:19 +0000 (0:00:00.397) 0:05:11.301 ************ 2025-05-04 00:33:19.917097 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:33:19.917366 | orchestrator | 2025-05-04 00:33:19.918310 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-04 00:33:19.921843 | orchestrator | Sunday 04 May 2025 00:33:19 +0000 (0:00:00.400) 0:05:11.701 ************ 2025-05-04 00:33:20.748954 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:20.749489 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:20.749538 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:20.749604 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:20.750672 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:33:20.751800 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:20.751973 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:20.752004 | orchestrator | 2025-05-04 00:33:20.752409 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-04 00:33:20.753127 | orchestrator | Sunday 04 May 2025 00:33:20 +0000 (0:00:00.831) 0:05:12.533 ************ 2025-05-04 00:33:23.433828 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:33:23.434364 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:33:23.434783 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:33:23.435771 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:33:23.436135 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:33:23.437245 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:33:23.437989 | orchestrator | ok: [testbed-manager] 2025-05-04 00:33:23.438510 | orchestrator | 2025-05-04 00:33:23.439032 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-04 00:33:23.439733 | orchestrator | Sunday 04 May 2025 00:33:23 +0000 (0:00:02.685) 
0:05:15.219 ************ 2025-05-04 00:33:23.504669 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-04 00:33:23.588227 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-04 00:33:23.588366 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-04 00:33:23.588729 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-04 00:33:23.589418 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-04 00:33:23.590305 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-04 00:33:23.659107 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:33:23.659347 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-04 00:33:23.660022 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-04 00:33:23.740416 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:33:23.740686 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-04 00:33:23.740776 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-04 00:33:23.741446 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-04 00:33:23.742234 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-04 00:33:23.814670 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:33:23.814891 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-04 00:33:23.816088 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-04 00:33:23.816666 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-04 00:33:23.882004 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:33:23.882981 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-04 00:33:23.884303 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-04 00:33:23.885530 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  
2025-05-04 00:33:24.012627 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:33:24.013861 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:33:24.014715 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-05-04 00:33:24.015320 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-05-04 00:33:24.015891 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-05-04 00:33:24.016748 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:33:24.017058 | orchestrator |
2025-05-04 00:33:24.017641 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-05-04 00:33:24.018104 | orchestrator | Sunday 04 May 2025 00:33:24 +0000 (0:00:00.576) 0:05:15.795 ************
2025-05-04 00:33:28.786137 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:28.786336 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:28.786678 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:28.787958 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:28.791859 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:28.792193 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:28.792229 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:28.792244 | orchestrator |
2025-05-04 00:33:28.792259 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-04 00:33:28.792282 | orchestrator | Sunday 04 May 2025 00:33:28 +0000 (0:00:04.774) 0:05:20.570 ************
2025-05-04 00:33:29.838173 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:29.838701 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:29.839809 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:29.840956 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:29.841899 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:29.842838 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:29.843834 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:29.844338 | orchestrator |
2025-05-04 00:33:29.845025 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-04 00:33:29.846723 | orchestrator | Sunday 04 May 2025 00:33:29 +0000 (0:00:01.048) 0:05:21.619 ************
2025-05-04 00:33:36.676200 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:36.677487 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:36.679095 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:36.679767 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:36.680378 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:36.681005 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:36.681857 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:36.682936 | orchestrator |
2025-05-04 00:33:36.683659 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-04 00:33:36.685164 | orchestrator | Sunday 04 May 2025 00:33:36 +0000 (0:00:06.836) 0:05:28.456 ************
2025-05-04 00:33:39.713206 | orchestrator | changed: [testbed-manager]
2025-05-04 00:33:39.713816 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:39.714254 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:39.715860 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:39.716279 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:39.716959 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:39.717486 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:39.717981 | orchestrator |
2025-05-04 00:33:39.718502 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-04 00:33:39.719028 | orchestrator | Sunday 04 May 2025 00:33:39 +0000 (0:00:03.040) 0:05:31.496 ************
2025-05-04 00:33:41.178180 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:41.179007 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:41.179066 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:41.180897 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:41.181658 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:41.182396 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:41.183628 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:41.184644 | orchestrator |
2025-05-04 00:33:41.185628 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-04 00:33:41.186641 | orchestrator | Sunday 04 May 2025 00:33:41 +0000 (0:00:01.462) 0:05:32.958 ************
2025-05-04 00:33:42.497937 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:42.498238 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:42.498618 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:42.499991 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:42.500744 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:42.501268 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:42.502315 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:42.502672 | orchestrator |
2025-05-04 00:33:42.502708 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-04 00:33:42.503369 | orchestrator | Sunday 04 May 2025 00:33:42 +0000 (0:00:01.319) 0:05:34.278 ************
2025-05-04 00:33:42.720733 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:33:42.796943 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:33:42.877551 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:33:42.966281 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:33:43.124993 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:33:43.125655 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:33:43.126895 | orchestrator | changed: [testbed-manager]
2025-05-04 00:33:43.127223 | orchestrator |
2025-05-04 00:33:43.130602 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-04 00:33:52.435938 | orchestrator | Sunday 04 May 2025 00:33:43 +0000 (0:00:00.630) 0:05:34.908 ************
2025-05-04 00:33:52.436155 | orchestrator | ok: [testbed-manager]
2025-05-04 00:33:52.436281 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:52.436615 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:52.438470 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:52.441185 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:52.443764 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:52.445444 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:52.445703 | orchestrator |
2025-05-04 00:33:52.445739 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-04 00:33:52.445848 | orchestrator | Sunday 04 May 2025 00:33:52 +0000 (0:00:09.310) 0:05:44.218 ************
2025-05-04 00:33:53.392094 | orchestrator | changed: [testbed-manager]
2025-05-04 00:33:53.393068 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:33:53.393356 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:33:53.395207 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:33:53.395753 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:33:53.397605 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:33:53.398628 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:33:53.398729 | orchestrator |
2025-05-04 00:33:53.399838 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-04 00:33:53.400307 | orchestrator | Sunday 04 May 2025 00:33:53 +0000 (0:00:00.954) 0:05:45.173 ************
2025-05-04 00:34:05.560046 | orchestrator | ok: [testbed-manager]
2025-05-04 00:34:05.560334 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:34:05.560364 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:34:05.560379 |
orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:05.560393 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:05.560407 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:05.560462 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:05.560521 | orchestrator | 2025-05-04 00:34:05.560540 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-04 00:34:05.560561 | orchestrator | Sunday 04 May 2025 00:34:05 +0000 (0:00:12.163) 0:05:57.336 ************ 2025-05-04 00:34:17.814743 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:17.816865 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:17.816915 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:17.817638 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:17.817674 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:17.817691 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:17.817732 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:17.817754 | orchestrator | 2025-05-04 00:34:17.818161 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-04 00:34:17.819508 | orchestrator | Sunday 04 May 2025 00:34:17 +0000 (0:00:12.259) 0:06:09.596 ************ 2025-05-04 00:34:18.247008 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-04 00:34:19.014900 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-04 00:34:19.016720 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-04 00:34:19.016777 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-04 00:34:19.016794 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-04 00:34:19.016818 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-04 00:34:19.019102 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-04 00:34:19.019244 | orchestrator | ok: 
[testbed-node-3] => (item=python-docker) 2025-05-04 00:34:19.019601 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-04 00:34:19.020105 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-04 00:34:19.020351 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-04 00:34:19.020918 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-04 00:34:19.021342 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-04 00:34:19.021767 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-04 00:34:19.022125 | orchestrator | 2025-05-04 00:34:19.022716 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-04 00:34:19.023331 | orchestrator | Sunday 04 May 2025 00:34:18 +0000 (0:00:01.191) 0:06:10.787 ************ 2025-05-04 00:34:19.146762 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:19.214308 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:19.277877 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:19.355207 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:19.417477 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:19.529322 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:19.532691 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:19.533312 | orchestrator | 2025-05-04 00:34:19.533354 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-04 00:34:19.533379 | orchestrator | Sunday 04 May 2025 00:34:19 +0000 (0:00:00.524) 0:06:11.312 ************ 2025-05-04 00:34:23.041440 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:23.041938 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:23.044488 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:23.045450 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:23.045502 | orchestrator | changed: [testbed-node-3] 
2025-05-04 00:34:23.045988 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:23.046948 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:23.047720 | orchestrator | 2025-05-04 00:34:23.048639 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-04 00:34:23.049343 | orchestrator | Sunday 04 May 2025 00:34:23 +0000 (0:00:03.512) 0:06:14.824 ************ 2025-05-04 00:34:23.164362 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:23.393867 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:23.460630 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:23.524084 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:23.594716 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:23.699030 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:23.699239 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:23.700885 | orchestrator | 2025-05-04 00:34:23.701802 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-04 00:34:23.703537 | orchestrator | Sunday 04 May 2025 00:34:23 +0000 (0:00:00.657) 0:06:15.482 ************ 2025-05-04 00:34:23.787439 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-04 00:34:23.787672 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-04 00:34:23.861769 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:23.861963 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-04 00:34:23.863268 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-04 00:34:23.928972 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:23.929849 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-04 00:34:23.930892 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-04 
00:34:23.996140 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:23.996728 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-04 00:34:23.997773 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-04 00:34:24.085826 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:24.087130 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-04 00:34:24.089830 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-04 00:34:24.155760 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:24.156868 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-04 00:34:24.157806 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-04 00:34:24.254664 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:24.255385 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-04 00:34:24.256353 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-04 00:34:24.257740 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:24.258282 | orchestrator | 2025-05-04 00:34:24.259024 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-04 00:34:24.259743 | orchestrator | Sunday 04 May 2025 00:34:24 +0000 (0:00:00.555) 0:06:16.038 ************ 2025-05-04 00:34:24.389992 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:24.453793 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:24.523941 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:24.588992 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:24.654318 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:24.753253 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:24.755472 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:24.755782 | orchestrator | 2025-05-04 00:34:24.757314 | 
orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-04 00:34:24.758501 | orchestrator | Sunday 04 May 2025 00:34:24 +0000 (0:00:00.499) 0:06:16.538 ************ 2025-05-04 00:34:24.884498 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:24.951133 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:25.019349 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:25.081350 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:25.151501 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:25.238674 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:25.239699 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:25.240766 | orchestrator | 2025-05-04 00:34:25.242232 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-04 00:34:25.242772 | orchestrator | Sunday 04 May 2025 00:34:25 +0000 (0:00:00.484) 0:06:17.022 ************ 2025-05-04 00:34:25.373143 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:25.435377 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:25.500173 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:34:25.569569 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:34:25.631290 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:34:25.743129 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:34:25.744520 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:34:25.746741 | orchestrator | 2025-05-04 00:34:25.747815 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-04 00:34:25.748742 | orchestrator | Sunday 04 May 2025 00:34:25 +0000 (0:00:00.504) 0:06:17.526 ************ 2025-05-04 00:34:31.620271 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:31.623817 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:31.623875 | orchestrator | changed: [testbed-node-0] 
2025-05-04 00:34:31.624799 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:31.624913 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:31.624950 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:31.625106 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:31.626272 | orchestrator | 2025-05-04 00:34:31.627743 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-04 00:34:31.629445 | orchestrator | Sunday 04 May 2025 00:34:31 +0000 (0:00:05.877) 0:06:23.403 ************ 2025-05-04 00:34:32.527885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:34:32.528569 | orchestrator | 2025-05-04 00:34:32.529118 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-04 00:34:32.530099 | orchestrator | Sunday 04 May 2025 00:34:32 +0000 (0:00:00.906) 0:06:24.310 ************ 2025-05-04 00:34:32.975886 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:33.385293 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:33.385915 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:33.386376 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:33.387514 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:33.388111 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:33.388741 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:33.389704 | orchestrator | 2025-05-04 00:34:33.390245 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-04 00:34:33.390545 | orchestrator | Sunday 04 May 2025 00:34:33 +0000 (0:00:00.859) 0:06:25.170 ************ 2025-05-04 00:34:34.435448 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:34.436322 | orchestrator | 
changed: [testbed-node-3] 2025-05-04 00:34:34.437376 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:34.438247 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:34.439707 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:34.440483 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:34.441314 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:34.442480 | orchestrator | 2025-05-04 00:34:34.443183 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-04 00:34:34.443771 | orchestrator | Sunday 04 May 2025 00:34:34 +0000 (0:00:01.047) 0:06:26.217 ************ 2025-05-04 00:34:35.753279 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:35.754181 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:35.754233 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:35.755939 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:35.757654 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:35.758340 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:35.759369 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:35.760016 | orchestrator | 2025-05-04 00:34:35.760783 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-04 00:34:35.761822 | orchestrator | Sunday 04 May 2025 00:34:35 +0000 (0:00:01.317) 0:06:27.534 ************ 2025-05-04 00:34:35.887489 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:37.109104 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:34:37.109243 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:34:37.110504 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:34:37.114128 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:37.114223 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:34:37.114236 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:37.114245 | orchestrator | 2025-05-04 00:34:37.114256 | orchestrator | TASK 
[osism.services.docker : Copy limits configuration file] ****************** 2025-05-04 00:34:37.114728 | orchestrator | Sunday 04 May 2025 00:34:37 +0000 (0:00:01.358) 0:06:28.893 ************ 2025-05-04 00:34:38.380411 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:38.380740 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:38.380779 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:38.380796 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:38.380820 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:38.381764 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:38.381831 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:38.382775 | orchestrator | 2025-05-04 00:34:38.384063 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-04 00:34:38.384128 | orchestrator | Sunday 04 May 2025 00:34:38 +0000 (0:00:01.266) 0:06:30.160 ************ 2025-05-04 00:34:39.776107 | orchestrator | changed: [testbed-manager] 2025-05-04 00:34:39.776576 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:39.777068 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:39.779552 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:39.779736 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:39.780204 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:39.780549 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:39.780932 | orchestrator | 2025-05-04 00:34:39.781291 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-04 00:34:39.781711 | orchestrator | Sunday 04 May 2025 00:34:39 +0000 (0:00:01.397) 0:06:31.557 ************ 2025-05-04 00:34:40.790697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-04 00:34:40.793822 | orchestrator | 2025-05-04 00:34:42.172135 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-04 00:34:42.172287 | orchestrator | Sunday 04 May 2025 00:34:40 +0000 (0:00:01.014) 0:06:32.572 ************ 2025-05-04 00:34:42.172327 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:42.172834 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:34:42.172874 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:34:42.174072 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:34:42.175860 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:42.176575 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:34:42.177733 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:42.178497 | orchestrator | 2025-05-04 00:34:42.179159 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-04 00:34:42.179716 | orchestrator | Sunday 04 May 2025 00:34:42 +0000 (0:00:01.382) 0:06:33.955 ************ 2025-05-04 00:34:43.308854 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:43.309440 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:34:43.310014 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:34:43.312054 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:34:43.313039 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:43.314900 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:34:43.315758 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:43.316440 | orchestrator | 2025-05-04 00:34:43.317217 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-04 00:34:43.318097 | orchestrator | Sunday 04 May 2025 00:34:43 +0000 (0:00:01.133) 0:06:35.089 ************ 2025-05-04 00:34:44.410749 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:44.411023 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:34:44.412119 | orchestrator | ok: [testbed-node-4] 2025-05-04 
00:34:44.416251 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:34:44.416704 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:44.417554 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:34:44.418348 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:44.418577 | orchestrator | 2025-05-04 00:34:44.419112 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-04 00:34:44.419619 | orchestrator | Sunday 04 May 2025 00:34:44 +0000 (0:00:01.102) 0:06:36.191 ************ 2025-05-04 00:34:45.789326 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:45.789761 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:34:45.790675 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:34:45.791959 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:34:45.793456 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:45.794241 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:34:45.795215 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:45.795836 | orchestrator | 2025-05-04 00:34:45.796358 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-04 00:34:45.796964 | orchestrator | Sunday 04 May 2025 00:34:45 +0000 (0:00:01.379) 0:06:37.571 ************ 2025-05-04 00:34:46.941059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:34:46.941857 | orchestrator | 2025-05-04 00:34:46.942962 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.945523 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.877) 0:06:38.449 ************ 2025-05-04 00:34:46.946639 | orchestrator | 2025-05-04 00:34:46.947666 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-05-04 00:34:46.948678 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.036) 0:06:38.485 ************ 2025-05-04 00:34:46.949431 | orchestrator | 2025-05-04 00:34:46.950501 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.951185 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.043) 0:06:38.528 ************ 2025-05-04 00:34:46.951688 | orchestrator | 2025-05-04 00:34:46.952410 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.952957 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.037) 0:06:38.565 ************ 2025-05-04 00:34:46.953516 | orchestrator | 2025-05-04 00:34:46.953940 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.954428 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.038) 0:06:38.604 ************ 2025-05-04 00:34:46.954925 | orchestrator | 2025-05-04 00:34:46.955389 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.956024 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.043) 0:06:38.647 ************ 2025-05-04 00:34:46.956559 | orchestrator | 2025-05-04 00:34:46.957045 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-04 00:34:46.957443 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.037) 0:06:38.685 ************ 2025-05-04 00:34:46.958153 | orchestrator | 2025-05-04 00:34:46.959006 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-04 00:34:48.021890 | orchestrator | Sunday 04 May 2025 00:34:46 +0000 (0:00:00.037) 0:06:38.722 ************ 2025-05-04 00:34:48.022153 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:34:48.022581 | orchestrator | ok: [testbed-node-1] 
2025-05-04 00:34:48.023405 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:34:48.024116 | orchestrator | 2025-05-04 00:34:48.024798 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-04 00:34:48.025577 | orchestrator | Sunday 04 May 2025 00:34:48 +0000 (0:00:01.080) 0:06:39.802 ************ 2025-05-04 00:34:49.851402 | orchestrator | changed: [testbed-manager] 2025-05-04 00:34:49.851747 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:49.853108 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:49.855778 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:49.856800 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:49.857431 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:49.858634 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:49.859398 | orchestrator | 2025-05-04 00:34:49.859745 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-04 00:34:49.860393 | orchestrator | Sunday 04 May 2025 00:34:49 +0000 (0:00:01.830) 0:06:41.632 ************ 2025-05-04 00:34:50.956235 | orchestrator | changed: [testbed-manager] 2025-05-04 00:34:50.959551 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:50.959709 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:50.961142 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:50.961768 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:50.961806 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:50.962504 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:50.963143 | orchestrator | 2025-05-04 00:34:50.964177 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-04 00:34:50.964556 | orchestrator | Sunday 04 May 2025 00:34:50 +0000 (0:00:01.104) 0:06:42.737 ************ 2025-05-04 00:34:51.086782 | orchestrator | skipping: [testbed-manager] 2025-05-04 
00:34:53.033472 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:53.035279 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:53.035560 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:53.038876 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:53.039829 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:53.040536 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:53.041112 | orchestrator | 2025-05-04 00:34:53.041894 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-04 00:34:53.043260 | orchestrator | Sunday 04 May 2025 00:34:53 +0000 (0:00:02.076) 0:06:44.814 ************ 2025-05-04 00:34:53.151286 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:34:53.151645 | orchestrator | 2025-05-04 00:34:53.151706 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-04 00:34:53.152031 | orchestrator | Sunday 04 May 2025 00:34:53 +0000 (0:00:00.120) 0:06:44.935 ************ 2025-05-04 00:34:54.138996 | orchestrator | ok: [testbed-manager] 2025-05-04 00:34:54.140158 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:34:54.141943 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:34:54.142498 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:34:54.143710 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:34:54.144840 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:34:54.145255 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:34:54.146418 | orchestrator | 2025-05-04 00:34:54.146769 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-04 00:34:54.147510 | orchestrator | Sunday 04 May 2025 00:34:54 +0000 (0:00:00.984) 0:06:45.919 ************ 2025-05-04 00:34:54.300312 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:34:54.363896 | orchestrator | skipping: [testbed-node-3] 2025-05-04 
00:34:54.444715 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:34:54.695031 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:34:54.762396 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:34:54.891889 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:34:54.892886 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:34:54.893765 | orchestrator |
2025-05-04 00:34:54.894998 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-04 00:34:54.895511 | orchestrator | Sunday 04 May 2025 00:34:54 +0000 (0:00:00.753) 0:06:46.673 ************
2025-05-04 00:34:55.818580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:34:55.818838 | orchestrator |
2025-05-04 00:34:55.819481 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-04 00:34:55.820285 | orchestrator | Sunday 04 May 2025 00:34:55 +0000 (0:00:00.928) 0:06:47.601 ************
2025-05-04 00:34:56.225892 | orchestrator | ok: [testbed-manager]
2025-05-04 00:34:56.736810 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:34:56.737098 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:34:56.738204 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:34:56.739572 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:34:56.739734 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:34:56.742735 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:34:59.414368 | orchestrator |
2025-05-04 00:34:59.414581 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-04 00:34:59.414876 | orchestrator | Sunday 04 May 2025 00:34:56 +0000 (0:00:00.920) 0:06:48.522 ************
2025-05-04 00:34:59.414918 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-04 00:34:59.415040 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-04 00:34:59.415774 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-04 00:34:59.416646 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-04 00:34:59.417759 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-04 00:34:59.419494 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-04 00:34:59.420372 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-04 00:34:59.421298 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-04 00:34:59.421843 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-04 00:34:59.423635 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-04 00:34:59.425100 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-04 00:34:59.425139 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-04 00:34:59.425160 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-04 00:34:59.428269 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-04 00:34:59.430176 | orchestrator |
2025-05-04 00:34:59.430796 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-04 00:34:59.431368 | orchestrator | Sunday 04 May 2025 00:34:59 +0000 (0:00:02.672) 0:06:51.195 ************
2025-05-04 00:34:59.564067 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:34:59.633936 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:34:59.713203 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:34:59.782578 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:34:59.850814 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:34:59.961853 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:34:59.962925 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:34:59.963305 | orchestrator |
2025-05-04 00:34:59.963877 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-04 00:34:59.964431 | orchestrator | Sunday 04 May 2025 00:34:59 +0000 (0:00:00.551) 0:06:51.746 ************
2025-05-04 00:35:00.837875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:35:00.838208 | orchestrator |
2025-05-04 00:35:00.839218 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-04 00:35:00.839571 | orchestrator | Sunday 04 May 2025 00:35:00 +0000 (0:00:00.872) 0:06:52.619 ************
2025-05-04 00:35:01.284083 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:01.885996 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:01.888625 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:01.889163 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:01.889912 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:01.892643 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:01.893504 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:01.893554 | orchestrator |
2025-05-04 00:35:01.894736 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-04 00:35:01.895921 | orchestrator | Sunday 04 May 2025 00:35:01 +0000 (0:00:01.050) 0:06:53.669 ************
2025-05-04 00:35:02.331001 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:02.729893 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:02.731392 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:02.732150 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:02.733032 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:02.734010 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:02.735115 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:02.736315 | orchestrator |
2025-05-04 00:35:02.737785 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-04 00:35:02.738688 | orchestrator | Sunday 04 May 2025 00:35:02 +0000 (0:00:00.838) 0:06:54.508 ************
2025-05-04 00:35:02.862735 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:02.925316 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:02.990863 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:03.068707 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:03.135810 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:03.233156 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:03.233460 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:03.234300 | orchestrator |
2025-05-04 00:35:03.235646 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-04 00:35:03.236579 | orchestrator | Sunday 04 May 2025 00:35:03 +0000 (0:00:00.507) 0:06:55.016 ************
2025-05-04 00:35:04.637709 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:04.638129 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:04.638925 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:04.639417 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:04.641285 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:04.642335 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:04.644018 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:04.644798 | orchestrator |
2025-05-04 00:35:04.648497 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-04 00:35:04.778686 | orchestrator | Sunday 04 May 2025 00:35:04 +0000 (0:00:01.404) 0:06:56.421 ************
2025-05-04 00:35:04.778831 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:04.852374 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:04.916668 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:04.983490 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:05.062152 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:05.157777 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:05.159582 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:05.160357 | orchestrator |
2025-05-04 00:35:05.160391 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-04 00:35:05.160416 | orchestrator | Sunday 04 May 2025 00:35:05 +0000 (0:00:00.519) 0:06:56.941 ************
2025-05-04 00:35:07.093218 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:07.094333 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:07.095104 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:07.095929 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:07.096741 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:07.097457 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:07.098252 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:07.098911 | orchestrator |
2025-05-04 00:35:07.099936 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-04 00:35:07.100676 | orchestrator | Sunday 04 May 2025 00:35:07 +0000 (0:00:01.931) 0:06:58.872 ************
2025-05-04 00:35:08.449885 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:08.450251 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:08.450789 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:08.451464 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:08.452280 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:08.452464 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:08.452967 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:08.453447 | orchestrator |
2025-05-04 00:35:08.453967 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-04 00:35:08.454689 | orchestrator | Sunday 04 May 2025 00:35:08 +0000 (0:00:01.360) 0:07:00.233 ************
2025-05-04 00:35:10.144278 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:10.145167 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:10.146416 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:10.147002 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:10.147042 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:10.147436 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:10.147961 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:10.148620 | orchestrator |
2025-05-04 00:35:10.149136 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-04 00:35:10.149532 | orchestrator | Sunday 04 May 2025 00:35:10 +0000 (0:00:01.689) 0:07:01.923 ************
2025-05-04 00:35:11.767261 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:11.768101 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:11.769022 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:11.769736 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:11.771973 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:11.773816 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:11.773856 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:11.773879 | orchestrator |
2025-05-04 00:35:11.774729 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-04 00:35:11.775233 | orchestrator | Sunday 04 May 2025 00:35:11 +0000 (0:00:01.625) 0:07:03.549 ************
2025-05-04 00:35:12.359087 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:12.424651 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:12.876835 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:12.877261 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:12.878272 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:12.881998 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:12.882585 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:12.883282 | orchestrator |
2025-05-04 00:35:12.883991 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-04 00:35:12.884697 | orchestrator | Sunday 04 May 2025 00:35:12 +0000 (0:00:01.110) 0:07:04.659 ************
2025-05-04 00:35:13.001248 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:13.066104 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:13.130160 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:13.189955 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:13.257475 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:13.687683 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:13.688202 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:13.688929 | orchestrator |
2025-05-04 00:35:13.689911 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-04 00:35:13.690758 | orchestrator | Sunday 04 May 2025 00:35:13 +0000 (0:00:00.808) 0:07:05.468 ************
2025-05-04 00:35:13.821734 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:13.883246 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:13.950284 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:14.014114 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:14.075390 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:14.189431 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:14.190223 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:14.191291 | orchestrator |
2025-05-04 00:35:14.192176 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-04 00:35:14.193121 | orchestrator | Sunday 04 May 2025 00:35:14 +0000 (0:00:00.506) 0:07:05.974 ************
2025-05-04 00:35:14.336194 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:14.403112 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:14.470330 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:14.531657 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:14.602149 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:14.698353 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:14.698998 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:14.700315 | orchestrator |
2025-05-04 00:35:14.704093 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-04 00:35:15.000388 | orchestrator | Sunday 04 May 2025 00:35:14 +0000 (0:00:00.507) 0:07:06.481 ************
2025-05-04 00:35:15.000528 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:15.063463 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:15.134704 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:15.198290 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:15.266831 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:15.367830 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:15.369167 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:15.370197 | orchestrator |
2025-05-04 00:35:15.371188 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-04 00:35:15.372117 | orchestrator | Sunday 04 May 2025 00:35:15 +0000 (0:00:00.669) 0:07:07.151 ************
2025-05-04 00:35:15.514834 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:15.580728 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:15.650557 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:15.717742 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:15.781365 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:15.891032 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:15.891811 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:15.892124 | orchestrator |
2025-05-04 00:35:15.893041 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-04 00:35:15.893816 | orchestrator | Sunday 04 May 2025 00:35:15 +0000 (0:00:00.525) 0:07:07.677 ************
2025-05-04 00:35:21.518374 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:21.518746 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:21.519345 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:21.520665 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:21.521359 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:21.521588 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:21.522106 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:21.522456 | orchestrator |
2025-05-04 00:35:21.523329 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-04 00:35:21.523619 | orchestrator | Sunday 04 May 2025 00:35:21 +0000 (0:00:05.623) 0:07:13.300 ************
2025-05-04 00:35:21.649043 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:21.712208 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:21.776098 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:21.843804 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:21.904082 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:22.013367 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:22.013564 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:22.013715 | orchestrator |
2025-05-04 00:35:22.014978 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-04 00:35:22.019971 | orchestrator | Sunday 04 May 2025 00:35:22 +0000 (0:00:00.496) 0:07:13.797 ************
2025-05-04 00:35:23.022074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:35:23.022291 | orchestrator |
2025-05-04 00:35:23.022322 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-04 00:35:23.023150 | orchestrator | Sunday 04 May 2025 00:35:23 +0000 (0:00:01.006) 0:07:14.804 ************
2025-05-04 00:35:24.731199 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:24.734462 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:24.734565 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:24.736924 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:24.736971 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:24.738105 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:24.739205 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:24.740136 | orchestrator |
2025-05-04 00:35:24.741331 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-04 00:35:24.741981 | orchestrator | Sunday 04 May 2025 00:35:24 +0000 (0:00:01.708) 0:07:16.512 ************
2025-05-04 00:35:25.841747 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:25.842514 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:25.843477 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:25.844668 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:25.845379 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:25.846338 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:25.847001 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:25.847782 | orchestrator |
2025-05-04 00:35:25.848504 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-04 00:35:25.848919 | orchestrator | Sunday 04 May 2025 00:35:25 +0000 (0:00:00.797) 0:07:17.625 ************
2025-05-04 00:35:26.245221 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:26.640468 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:26.641372 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:26.641884 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:26.645537 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:26.645739 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:26.646514 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:26.646907 | orchestrator |
2025-05-04 00:35:26.647887 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-04 00:35:26.648555 | orchestrator | Sunday 04 May 2025 00:35:26 +0000 (0:00:00.797) 0:07:18.423 ************
2025-05-04 00:35:28.512664 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.513114 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.514129 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.516394 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.516946 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.517527 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.518340 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-04 00:35:28.519582 | orchestrator |
2025-05-04 00:35:28.520360 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-04 00:35:28.521308 | orchestrator | Sunday 04 May 2025 00:35:28 +0000 (0:00:01.871) 0:07:20.295 ************
2025-05-04 00:35:29.250726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:35:29.251212 | orchestrator |
2025-05-04 00:35:29.252156 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-04 00:35:29.252565 | orchestrator | Sunday 04 May 2025 00:35:29 +0000 (0:00:00.739) 0:07:21.034 ************
2025-05-04 00:35:37.926303 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:37.926764 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:37.927162 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:37.930788 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:39.603704 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:39.603827 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:39.603842 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:39.603855 | orchestrator |
2025-05-04 00:35:39.603868 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-04 00:35:39.603881 | orchestrator | Sunday 04 May 2025 00:35:37 +0000 (0:00:08.673) 0:07:29.708 ************
2025-05-04 00:35:39.603908 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:39.604394 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:39.607689 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:39.608410 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:39.608433 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:39.608450 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:39.609335 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:39.610107 | orchestrator |
2025-05-04 00:35:39.610868 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-04 00:35:39.611541 | orchestrator | Sunday 04 May 2025 00:35:39 +0000 (0:00:01.677) 0:07:31.385 ************
2025-05-04 00:35:40.898764 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:40.899623 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:40.900483 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:40.901833 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:40.902456 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:40.902925 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:40.903432 | orchestrator |
2025-05-04 00:35:40.904021 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-04 00:35:40.904651 | orchestrator | Sunday 04 May 2025 00:35:40 +0000 (0:00:01.293) 0:07:32.679 ************
2025-05-04 00:35:42.437369 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:42.437759 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:42.438637 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:42.439634 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:42.440164 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:42.444799 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:42.444995 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:42.445023 | orchestrator |
2025-05-04 00:35:42.445040 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-04 00:35:42.445060 | orchestrator |
2025-05-04 00:35:42.445645 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-04 00:35:42.446133 | orchestrator | Sunday 04 May 2025 00:35:42 +0000 (0:00:01.542) 0:07:34.222 ************
2025-05-04 00:35:42.576963 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:42.641904 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:42.718253 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:42.783290 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:42.847548 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:42.981592 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:42.982251 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:42.982508 | orchestrator |
2025-05-04 00:35:42.984202 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-04 00:35:42.984636 | orchestrator |
2025-05-04 00:35:42.985035 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-04 00:35:42.985479 | orchestrator | Sunday 04 May 2025 00:35:42 +0000 (0:00:00.545) 0:07:34.767 ************
2025-05-04 00:35:44.316666 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:44.317368 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:44.318512 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:44.319676 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:44.320291 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:44.321344 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:44.321884 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:44.323165 | orchestrator |
2025-05-04 00:35:44.324174 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-04 00:35:44.324943 | orchestrator | Sunday 04 May 2025 00:35:44 +0000 (0:00:01.332) 0:07:36.099 ************
2025-05-04 00:35:45.700209 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:45.700400 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:45.702935 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:45.703591 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:45.703670 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:45.704683 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:45.705428 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:45.705762 | orchestrator |
2025-05-04 00:35:45.707110 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-04 00:35:45.707200 | orchestrator | Sunday 04 May 2025 00:35:45 +0000 (0:00:01.382) 0:07:37.482 ************
2025-05-04 00:35:45.837235 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:35:46.063254 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:35:46.128123 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:35:46.191477 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:35:46.259558 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:35:46.635224 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:35:46.636159 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:35:46.636724 | orchestrator |
2025-05-04 00:35:46.640024 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-04 00:35:47.814362 | orchestrator | Sunday 04 May 2025 00:35:46 +0000 (0:00:00.935) 0:07:38.418 ************
2025-05-04 00:35:47.814500 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:47.814560 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:47.814889 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:47.815436 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:47.815618 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:47.816114 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:47.816546 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:47.817126 | orchestrator |
2025-05-04 00:35:47.817391 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-04 00:35:47.817966 | orchestrator |
2025-05-04 00:35:47.818773 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-04 00:35:47.819817 | orchestrator | Sunday 04 May 2025 00:35:47 +0000 (0:00:01.180) 0:07:39.598 ************
2025-05-04 00:35:48.762351 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:35:48.763683 | orchestrator |
2025-05-04 00:35:48.764669 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-04 00:35:48.764701 | orchestrator | Sunday 04 May 2025 00:35:48 +0000 (0:00:00.944) 0:07:40.543 ************
2025-05-04 00:35:49.308270 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:49.711565 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:49.712106 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:49.712794 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:49.715480 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:49.716772 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:49.716801 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:49.717131 | orchestrator |
2025-05-04 00:35:49.718690 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-04 00:35:49.719100 | orchestrator | Sunday 04 May 2025 00:35:49 +0000 (0:00:00.948) 0:07:41.492 ************
2025-05-04 00:35:50.842973 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:50.843525 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:50.843565 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:50.844037 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:50.847673 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:51.783786 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:51.783953 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:51.783976 | orchestrator |
2025-05-04 00:35:51.783994 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-04 00:35:51.784010 | orchestrator | Sunday 04 May 2025 00:35:50 +0000 (0:00:01.132) 0:07:42.624 ************
2025-05-04 00:35:51.784042 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:35:51.787088 | orchestrator |
2025-05-04 00:35:51.788717 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-04 00:35:51.791399 | orchestrator | Sunday 04 May 2025 00:35:51 +0000 (0:00:00.940) 0:07:43.564 ************
2025-05-04 00:35:52.189997 | orchestrator | ok: [testbed-manager]
2025-05-04 00:35:52.640651 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:35:52.641370 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:35:52.641424 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:35:52.642469 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:35:52.643455 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:35:52.645312 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:35:52.646268 | orchestrator |
2025-05-04 00:35:52.646858 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-04 00:35:52.647583 | orchestrator | Sunday 04 May 2025 00:35:52 +0000 (0:00:00.858) 0:07:44.423 ************
2025-05-04 00:35:53.090559 | orchestrator | changed: [testbed-manager]
2025-05-04 00:35:53.763988 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:35:53.765065 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:35:53.765116 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:35:53.765759 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:35:53.766967 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:35:53.767829 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:35:53.769119 | orchestrator |
2025-05-04 00:35:53.769714 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:35:53.770114 | orchestrator | 2025-05-04 00:35:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:35:53.771203 | orchestrator | 2025-05-04 00:35:53 | INFO  | Please wait and do not abort execution.
2025-05-04 00:35:53.771237 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-04 00:35:53.772092 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-04 00:35:53.772963 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-04 00:35:53.773760 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-04 00:35:53.774570 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-04 00:35:53.775597 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-04 00:35:53.775831 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-04 00:35:53.776549 | orchestrator |
2025-05-04 00:35:53.777271 | orchestrator | Sunday 04 May 2025 00:35:53 +0000 (0:00:01.123) 0:07:45.546 ************
2025-05-04 00:35:53.778103 | orchestrator | ===============================================================================
2025-05-04 00:35:53.779093 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.03s
2025-05-04 00:35:53.779680 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.98s
2025-05-04 00:35:53.780660 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.82s
2025-05-04 00:35:53.781060 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.73s
2025-05-04 00:35:53.781816 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.26s
2025-05-04 00:35:53.782501 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.19s
2025-05-04 00:35:53.783183 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.16s
2025-05-04 00:35:53.783794 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.17s
2025-05-04 00:35:53.784281 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.31s
2025-05-04 00:35:53.784819 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.67s
2025-05-04 00:35:53.785521 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.18s
2025-05-04 00:35:53.785922 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.53s
2025-05-04 00:35:53.786431 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.46s
2025-05-04 00:35:53.786929 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.84s
2025-05-04 00:35:53.787534 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.83s
2025-05-04 00:35:53.787767 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.88s
2025-05-04 00:35:53.788496 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.88s
2025-05-04 00:35:53.789897 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.69s
2025-05-04 00:35:53.790186 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.62s
2025-05-04 00:35:53.790919 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.05s
2025-05-04 00:35:54.752646 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-04 00:35:56.600176 | orchestrator | + osism apply network
2025-05-04 00:35:56.600331 | orchestrator | 2025-05-04 00:35:56 | INFO  | Task 9f715158-c232-452d-a6c1-a1143894188b (network) was prepared for execution.
2025-05-04 00:35:59.806494 | orchestrator | 2025-05-04 00:35:56 | INFO  | It takes a moment until task 9f715158-c232-452d-a6c1-a1143894188b (network) has been started and output is visible here.
2025-05-04 00:35:59.806717 | orchestrator |
2025-05-04 00:35:59.807043 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-04 00:35:59.807945 | orchestrator |
2025-05-04 00:35:59.808659 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-04 00:35:59.810145 | orchestrator | Sunday 04 May 2025 00:35:59 +0000 (0:00:00.214) 0:00:00.214 ************
2025-05-04 00:35:59.951203 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:00.025826 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:36:00.099816 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:36:00.174642 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:36:00.247719 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:36:00.466486 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:36:00.466993 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:36:00.468705 | orchestrator |
2025-05-04 00:36:00.469443 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-04 00:36:00.470973 | orchestrator | Sunday 04 May 2025 00:36:00 +0000 (0:00:00.659) 0:00:00.874 ************
2025-05-04 00:36:01.599065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:36:01.602741 | orchestrator |
2025-05-04 00:36:03.510470 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-05-04 00:36:03.510656 | orchestrator | Sunday 04 May 2025 00:36:01 +0000 (0:00:01.130) 0:00:02.004 ************
2025-05-04 00:36:03.510752 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:03.512014 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:36:03.512273 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:36:03.514199 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:36:03.515015 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:36:03.515762 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:36:03.516513 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:36:03.517277 | orchestrator |
2025-05-04 00:36:03.517996 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-05-04 00:36:03.518757 | orchestrator | Sunday 04 May 2025 00:36:03 +0000 (0:00:01.909) 0:00:03.914 ************
2025-05-04 00:36:05.520003 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:05.524746 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:36:05.524834 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:36:05.525665 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:36:05.525681 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:36:05.525693 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:36:05.526287 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:36:05.526651 | orchestrator |
2025-05-04 00:36:05.527288 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-05-04 00:36:05.527808 | orchestrator | Sunday 04 May 2025 00:36:05 +0000 (0:00:02.011)
0:00:05.925 ************ 2025-05-04 00:36:06.033946 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-04 00:36:06.034889 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-04 00:36:06.697282 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-04 00:36:06.697726 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-04 00:36:06.699255 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-04 00:36:06.704107 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-04 00:36:06.704530 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-04 00:36:06.705199 | orchestrator | 2025-05-04 00:36:06.705877 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-04 00:36:06.707436 | orchestrator | Sunday 04 May 2025 00:36:06 +0000 (0:00:01.178) 0:00:07.103 ************ 2025-05-04 00:36:08.443464 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 00:36:08.443961 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-04 00:36:08.444887 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:36:08.445724 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-04 00:36:08.446788 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-04 00:36:08.447269 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-04 00:36:08.448147 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-04 00:36:08.448512 | orchestrator | 2025-05-04 00:36:08.449456 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-04 00:36:08.449930 | orchestrator | Sunday 04 May 2025 00:36:08 +0000 (0:00:01.749) 0:00:08.853 ************ 2025-05-04 00:36:10.199075 | orchestrator | changed: [testbed-manager] 2025-05-04 00:36:10.199293 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:36:10.199900 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:36:10.199930 
| orchestrator | changed: [testbed-node-2] 2025-05-04 00:36:10.200394 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:36:10.201421 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:36:10.203086 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:36:10.205317 | orchestrator | 2025-05-04 00:36:10.730546 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-04 00:36:10.730748 | orchestrator | Sunday 04 May 2025 00:36:10 +0000 (0:00:01.749) 0:00:10.602 ************ 2025-05-04 00:36:10.730786 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 00:36:11.172752 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:36:11.173222 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-04 00:36:11.176717 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-04 00:36:11.179591 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-04 00:36:11.180413 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-04 00:36:11.180449 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-04 00:36:11.182417 | orchestrator | 2025-05-04 00:36:11.183325 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-04 00:36:11.184040 | orchestrator | Sunday 04 May 2025 00:36:11 +0000 (0:00:00.979) 0:00:11.582 ************ 2025-05-04 00:36:11.611294 | orchestrator | ok: [testbed-manager] 2025-05-04 00:36:11.699260 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:36:12.315001 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:36:12.315534 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:36:12.315645 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:36:12.315719 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:36:12.316083 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:36:12.316551 | orchestrator | 2025-05-04 00:36:12.316965 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 
2025-05-04 00:36:12.317421 | orchestrator | Sunday 04 May 2025 00:36:12 +0000 (0:00:01.138) 0:00:12.720 ************ 2025-05-04 00:36:12.485700 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:36:12.567889 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:36:12.647148 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:36:12.723055 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:36:12.803459 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:36:13.087982 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:36:13.090143 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:36:13.091172 | orchestrator | 2025-05-04 00:36:13.091865 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-04 00:36:13.092474 | orchestrator | Sunday 04 May 2025 00:36:13 +0000 (0:00:00.774) 0:00:13.495 ************ 2025-05-04 00:36:15.131382 | orchestrator | ok: [testbed-manager] 2025-05-04 00:36:15.131862 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:36:15.135966 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:36:15.137252 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:36:15.138471 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:36:15.139002 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:36:15.140534 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:36:15.141325 | orchestrator | 2025-05-04 00:36:15.141732 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-04 00:36:15.142487 | orchestrator | Sunday 04 May 2025 00:36:15 +0000 (0:00:02.045) 0:00:15.540 ************ 2025-05-04 00:36:17.003692 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-04 00:36:17.004013 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.004711 | 
orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.006994 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.007554 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.007599 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.008459 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.009461 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-04 00:36:17.009989 | orchestrator | 2025-05-04 00:36:17.011009 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-04 00:36:17.011905 | orchestrator | Sunday 04 May 2025 00:36:16 +0000 (0:00:01.868) 0:00:17.408 ************ 2025-05-04 00:36:18.526388 | orchestrator | ok: [testbed-manager] 2025-05-04 00:36:18.526587 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:36:18.526702 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:36:18.526720 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:36:18.526744 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:36:18.527641 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:36:18.528775 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:36:18.529399 | orchestrator | 2025-05-04 00:36:18.530237 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-04 00:36:18.530875 | orchestrator | Sunday 04 May 2025 00:36:18 +0000 (0:00:01.522) 0:00:18.931 ************ 2025-05-04 
00:36:19.917137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:36:19.917436 | orchestrator | 2025-05-04 00:36:19.918380 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-04 00:36:19.919382 | orchestrator | Sunday 04 May 2025 00:36:19 +0000 (0:00:01.392) 0:00:20.324 ************ 2025-05-04 00:36:20.887787 | orchestrator | ok: [testbed-manager] 2025-05-04 00:36:20.889376 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:36:20.890205 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:36:20.890287 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:36:20.890357 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:36:20.890772 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:36:20.891316 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:36:20.891678 | orchestrator | 2025-05-04 00:36:20.892183 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-04 00:36:20.892582 | orchestrator | Sunday 04 May 2025 00:36:20 +0000 (0:00:00.969) 0:00:21.293 ************ 2025-05-04 00:36:21.050700 | orchestrator | ok: [testbed-manager] 2025-05-04 00:36:21.131594 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:36:21.375891 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:36:21.458344 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:36:21.542505 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:36:21.681460 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:36:21.682121 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:36:21.683132 | orchestrator | 2025-05-04 00:36:21.685567 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-04 00:36:22.033472 | orchestrator | Sunday 04 May 2025 00:36:21 +0000 
(0:00:00.793) 0:00:22.087 ************ 2025-05-04 00:36:22.033684 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.126519 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.126785 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.126875 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.220798 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.221001 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.654195 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.655241 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.655849 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.656442 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.657688 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.658093 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.659059 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-04 00:36:22.659371 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-04 00:36:22.660076 | orchestrator | 2025-05-04 00:36:22.661749 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-04 00:36:22.983320 | orchestrator | Sunday 04 May 2025 00:36:22 +0000 (0:00:00.976) 0:00:23.063 ************ 2025-05-04 00:36:22.983544 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:36:23.065059 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 00:36:23.144583 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:36:23.224201 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:36:23.302374 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:36:24.438385 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:36:24.439505 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:36:24.440221 | orchestrator | 2025-05-04 00:36:24.440253 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-04 00:36:24.441069 | orchestrator | Sunday 04 May 2025 00:36:24 +0000 (0:00:01.779) 0:00:24.843 ************ 2025-05-04 00:36:24.603848 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:36:24.688400 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:36:24.949331 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:36:25.031058 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:36:25.114869 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:36:25.153049 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:36:25.153534 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:36:25.153819 | orchestrator | 2025-05-04 00:36:25.154977 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:36:25.156050 | orchestrator | 2025-05-04 00:36:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-04 00:36:25.156328 | orchestrator | 2025-05-04 00:36:25 | INFO  | Please wait and do not abort execution. 
2025-05-04 00:36:25.156363 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.157697 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.158722 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.159894 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.160416 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.161184 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.161591 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-04 00:36:25.162357 | orchestrator |
2025-05-04 00:36:25.162986 | orchestrator | Sunday 04 May 2025 00:36:25 +0000 (0:00:00.720) 0:00:25.564 ************
2025-05-04 00:36:25.163449 | orchestrator | ===============================================================================
2025-05-04 00:36:25.163955 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s
2025-05-04 00:36:25.164462 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.01s
2025-05-04 00:36:25.164996 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.91s
2025-05-04 00:36:25.166109 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.87s
2025-05-04 00:36:25.166783 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.78s
2025-05-04 00:36:25.167231 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.75s
2025-05-04 00:36:25.167686 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.75s
2025-05-04 00:36:25.168137 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s
2025-05-04 00:36:25.168774 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2025-05-04 00:36:25.169238 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s
2025-05-04 00:36:25.169776 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s
2025-05-04 00:36:25.170302 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.13s
2025-05-04 00:36:25.170746 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.98s
2025-05-04 00:36:25.171424 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.98s
2025-05-04 00:36:25.171541 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s
2025-05-04 00:36:25.171792 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.79s
2025-05-04 00:36:25.172185 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.77s
2025-05-04 00:36:25.172736 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.72s
2025-05-04 00:36:25.172994 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.66s
2025-05-04 00:36:25.652116 | orchestrator | + osism apply wireguard
2025-05-04 00:36:27.045441 | orchestrator | 2025-05-04 00:36:27 | INFO  | Task 58ddba7f-63b9-41c1-a335-3e4bc57181d0 (wireguard) was prepared for execution.
2025-05-04 00:36:30.087069 | orchestrator | 2025-05-04 00:36:27 | INFO  | It takes a moment until task 58ddba7f-63b9-41c1-a335-3e4bc57181d0 (wireguard) has been started and output is visible here.
2025-05-04 00:36:30.087230 | orchestrator |
2025-05-04 00:36:30.088926 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-04 00:36:30.089041 | orchestrator |
2025-05-04 00:36:30.089821 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-04 00:36:30.090592 | orchestrator | Sunday 04 May 2025 00:36:30 +0000 (0:00:00.158) 0:00:00.158 ************
2025-05-04 00:36:31.541994 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:31.542667 | orchestrator |
2025-05-04 00:36:31.543191 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-04 00:36:31.543996 | orchestrator | Sunday 04 May 2025 00:36:31 +0000 (0:00:01.457) 0:00:01.616 ************
2025-05-04 00:36:37.877050 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:37.877338 | orchestrator |
2025-05-04 00:36:37.878214 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-04 00:36:37.879564 | orchestrator | Sunday 04 May 2025 00:36:37 +0000 (0:00:06.334) 0:00:07.950 ************
2025-05-04 00:36:38.413742 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:38.413932 | orchestrator |
2025-05-04 00:36:38.414505 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-04 00:36:38.415233 | orchestrator | Sunday 04 May 2025 00:36:38 +0000 (0:00:00.535) 0:00:08.486 ************
2025-05-04 00:36:38.825814 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:38.826241 | orchestrator |
2025-05-04 00:36:38.826733 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-04 00:36:38.827306 | orchestrator | Sunday 04 May 2025 00:36:38 +0000 (0:00:00.415) 0:00:08.902 ************
2025-05-04 00:36:39.308869 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:39.310070 | orchestrator |
2025-05-04 00:36:39.311167 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-04 00:36:39.311848 | orchestrator | Sunday 04 May 2025 00:36:39 +0000 (0:00:00.479) 0:00:09.381 ************
2025-05-04 00:36:39.834217 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:39.834991 | orchestrator |
2025-05-04 00:36:39.837599 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-04 00:36:39.838151 | orchestrator | Sunday 04 May 2025 00:36:39 +0000 (0:00:00.528) 0:00:09.910 ************
2025-05-04 00:36:40.264173 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:41.444818 | orchestrator |
2025-05-04 00:36:41.444990 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-04 00:36:41.445012 | orchestrator | Sunday 04 May 2025 00:36:40 +0000 (0:00:00.427) 0:00:10.337 ************
2025-05-04 00:36:41.445045 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:41.445919 | orchestrator |
2025-05-04 00:36:41.446796 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-04 00:36:41.448510 | orchestrator | Sunday 04 May 2025 00:36:41 +0000 (0:00:01.181) 0:00:11.518 ************
2025-05-04 00:36:42.361911 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-04 00:36:42.362178 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:42.363126 | orchestrator |
2025-05-04 00:36:42.363160 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-04 00:36:42.363676 | orchestrator | Sunday 04 May 2025 00:36:42 +0000 (0:00:00.914) 0:00:12.433 ************
2025-05-04 00:36:44.001011 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:44.002593 | orchestrator |
2025-05-04 00:36:44.002720 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-04 00:36:44.003864 | orchestrator | Sunday 04 May 2025 00:36:43 +0000 (0:00:01.639) 0:00:14.073 ************
2025-05-04 00:36:44.938736 | orchestrator | changed: [testbed-manager]
2025-05-04 00:36:44.940093 | orchestrator |
2025-05-04 00:36:44.940525 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:36:44.941154 | orchestrator | 2025-05-04 00:36:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:36:44.941446 | orchestrator | 2025-05-04 00:36:44 | INFO  | Please wait and do not abort execution.
2025-05-04 00:36:44.943442 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:36:44.943967 | orchestrator |
2025-05-04 00:36:44.946363 | orchestrator | Sunday 04 May 2025 00:36:44 +0000 (0:00:00.935) 0:00:15.009 ************
2025-05-04 00:36:44.948231 | orchestrator | ===============================================================================
2025-05-04 00:36:44.948837 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.33s
2025-05-04 00:36:44.949932 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s
2025-05-04 00:36:44.950695 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.46s
2025-05-04 00:36:44.951538 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s
2025-05-04 00:36:44.951914 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-05-04 00:36:44.952642 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s
2025-05-04 00:36:44.953516 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-05-04 00:36:44.954198 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-05-04 00:36:44.954845 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s
2025-05-04 00:36:44.955741 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2025-05-04 00:36:44.956454 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-05-04 00:36:45.786839 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-04 00:36:45.826485 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-04 00:36:45.826781 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-04 00:36:45.912546 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 172 0 --:--:-- --:--:-- --:--:-- 174
2025-05-04 00:36:45.928184 | orchestrator | + osism apply --environment custom workarounds
2025-05-04 00:36:47.348130 | orchestrator | 2025-05-04 00:36:47 | INFO  | Trying to run play workarounds in environment custom
2025-05-04 00:36:47.398766 | orchestrator | 2025-05-04 00:36:47 | INFO  | Task 6839e2d9-13bd-405d-8b4f-1edabe2c8076 (workarounds) was prepared for execution.
2025-05-04 00:36:51.147017 | orchestrator | 2025-05-04 00:36:47 | INFO  | It takes a moment until task 6839e2d9-13bd-405d-8b4f-1edabe2c8076 (workarounds) has been started and output is visible here.
2025-05-04 00:36:51.147177 | orchestrator |
2025-05-04 00:36:51.148227 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:36:51.149451 | orchestrator |
2025-05-04 00:36:51.149503 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-04 00:36:51.151715 | orchestrator | Sunday 04 May 2025 00:36:51 +0000 (0:00:00.185) 0:00:00.185 ************
2025-05-04 00:36:51.327166 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-04 00:36:51.434483 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-04 00:36:51.535512 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-04 00:36:51.628237 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-04 00:36:51.716787 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-04 00:36:52.008864 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-04 00:36:52.010134 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-04 00:36:52.011489 | orchestrator |
2025-05-04 00:36:52.012577 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-04 00:36:52.013597 | orchestrator |
2025-05-04 00:36:52.013892 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-04 00:36:52.014513 | orchestrator | Sunday 04 May 2025 00:36:52 +0000 (0:00:00.864) 0:00:01.050 ************
2025-05-04 00:36:54.727918 | orchestrator | ok: [testbed-manager]
2025-05-04 00:36:54.728571 | orchestrator |
2025-05-04 00:36:54.728647 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-04 00:36:54.728683 | orchestrator |
2025-05-04 00:36:54.728939 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-04 00:36:54.729250 | orchestrator | Sunday 04 May 2025 00:36:54 +0000 (0:00:02.714) 0:00:03.764 ************
2025-05-04 00:36:56.544211 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:36:56.545790 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:36:56.546333 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:36:56.547003 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:36:56.550602 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:36:56.551442 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:36:56.552239 | orchestrator |
2025-05-04 00:36:56.552957 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-04 00:36:56.553555 | orchestrator |
2025-05-04 00:36:56.556838 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-04 00:36:58.018086 | orchestrator | Sunday 04 May 2025 00:36:56 +0000 (0:00:01.820) 0:00:05.585 ************
2025-05-04 00:36:58.018244 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.018505 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.019316 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.019979 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.021694 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.022080 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-04 00:36:58.022686 | orchestrator |
2025-05-04 00:36:58.023258 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-04 00:36:58.023875 | orchestrator | Sunday 04 May 2025 00:36:58 +0000 (0:00:01.469) 0:00:07.054 ************
2025-05-04 00:37:01.775603 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:37:01.775840 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:37:01.776801 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:37:01.777574 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:37:01.779669 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:37:01.780544 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:37:01.781443 | orchestrator |
2025-05-04 00:37:01.782679 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-04 00:37:01.782828 | orchestrator | Sunday 04 May 2025 00:37:01 +0000 (0:00:03.761) 0:00:10.816 ************
2025-05-04 00:37:01.957666 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:37:02.044697 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:37:02.130307 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:37:02.424177 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:37:02.581237 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:37:02.581405 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:37:02.582383 | orchestrator |
2025-05-04 00:37:02.583165 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-04 00:37:02.586707 | orchestrator |
2025-05-04 00:37:04.259251 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-04 00:37:04.259387 | orchestrator | Sunday 04 May 2025 00:37:02 +0000 (0:00:00.805) 0:00:11.621 ************
2025-05-04 00:37:04.259424 | orchestrator | changed: [testbed-manager]
2025-05-04 00:37:04.260767 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:37:04.260814 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:37:04.262775 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:37:04.264076 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:37:04.264155 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:37:04.264783 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:37:04.265511 | orchestrator |
2025-05-04 00:37:04.266447 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-04 00:37:04.267293 | orchestrator | Sunday 04 May 2025 00:37:04 +0000 (0:00:01.678) 0:00:13.299 ************
2025-05-04 00:37:06.058322 | orchestrator | changed: [testbed-manager]
2025-05-04 00:37:06.058892 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:37:06.060486 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:37:06.061152 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:37:06.061799 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:37:06.062349 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:37:06.066698 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:37:06.067027 | orchestrator |
2025-05-04 00:37:06.067383 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-04 00:37:06.068111 | orchestrator | Sunday 04 May 2025 00:37:06 +0000 (0:00:01.795) 0:00:15.095 ************
2025-05-04 00:37:07.614589 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:37:07.617841 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:37:07.617901 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:37:07.618400 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:37:07.619126 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:37:07.619810 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:37:07.620613 | orchestrator | ok: [testbed-manager]
2025-05-04 00:37:07.621101 | orchestrator |
2025-05-04 00:37:07.621712 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-04 00:37:07.622334 | orchestrator
| Sunday 04 May 2025 00:37:07 +0000 (0:00:01.557) 0:00:16.652 ************ 2025-05-04 00:37:09.511753 | orchestrator | changed: [testbed-manager] 2025-05-04 00:37:09.513892 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:37:09.515241 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:37:09.515282 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:37:09.516676 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:37:09.516885 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:37:09.517562 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:37:09.518281 | orchestrator | 2025-05-04 00:37:09.518974 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-04 00:37:09.519415 | orchestrator | Sunday 04 May 2025 00:37:09 +0000 (0:00:01.898) 0:00:18.551 ************ 2025-05-04 00:37:09.701906 | orchestrator | skipping: [testbed-manager] 2025-05-04 00:37:09.788233 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:37:09.874722 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:37:09.955571 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:37:10.287084 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:37:10.439368 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:37:10.439539 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:37:10.440063 | orchestrator | 2025-05-04 00:37:10.440358 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-04 00:37:10.442918 | orchestrator | 2025-05-04 00:37:10.443611 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-04 00:37:10.444415 | orchestrator | Sunday 04 May 2025 00:37:10 +0000 (0:00:00.930) 0:00:19.481 ************ 2025-05-04 00:37:12.868800 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:37:12.869297 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:37:12.869852 | orchestrator | ok: [testbed-node-3] 
2025-05-04 00:37:12.871020 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:37:12.872408 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:37:12.872645 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:37:12.874111 | orchestrator | ok: [testbed-manager] 2025-05-04 00:37:12.874331 | orchestrator | 2025-05-04 00:37:12.874762 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:37:12.876226 | orchestrator | 2025-05-04 00:37:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-04 00:37:12.876522 | orchestrator | 2025-05-04 00:37:12 | INFO  | Please wait and do not abort execution. 2025-05-04 00:37:12.876571 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-04 00:37:12.876687 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.877568 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.878151 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.878658 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.879043 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.879480 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 00:37:12.879846 | orchestrator | 2025-05-04 00:37:12.880241 | orchestrator | Sunday 04 May 2025 00:37:12 +0000 (0:00:02.428) 0:00:21.909 ************ 2025-05-04 00:37:12.880654 | orchestrator | =============================================================================== 2025-05-04 00:37:12.881057 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.76s 2025-05-04 00:37:12.881387 | orchestrator | Apply netplan configuration --------------------------------------------- 2.71s 2025-05-04 00:37:12.881783 | orchestrator | Install python3-docker -------------------------------------------------- 2.43s 2025-05-04 00:37:12.882261 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.90s 2025-05-04 00:37:12.882612 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2025-05-04 00:37:12.882974 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.80s 2025-05-04 00:37:12.883311 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s 2025-05-04 00:37:12.883730 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s 2025-05-04 00:37:12.885502 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-05-04 00:37:12.886432 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.93s 2025-05-04 00:37:12.886824 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s 2025-05-04 00:37:12.887333 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.81s 2025-05-04 00:37:13.654921 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-04 00:37:15.064169 | orchestrator | 2025-05-04 00:37:15 | INFO  | Task 62b9560e-18be-4036-b881-b23e208563e5 (reboot) was prepared for execution. 2025-05-04 00:37:18.242237 | orchestrator | 2025-05-04 00:37:15 | INFO  | It takes a moment until task 62b9560e-18be-4036-b881-b23e208563e5 (reboot) has been started and output is visible here. 
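The `-e ireallymeanit=yes` extra variable satisfies a confirmation guard: the reboot playbook's first task ("Exit playbook, if user did not mean to reboot systems") aborts the run unless the caller explicitly confirms. The same pattern can be sketched in shell; the `ireallymeanit` variable name is taken from the command above, the function is illustrative:

```shell
#!/bin/sh
# Confirmation-guard sketch for destructive operations: refuse to
# proceed unless the caller explicitly passed ireallymeanit=yes.
ireallymeanit="${ireallymeanit:-no}"

confirm_or_abort() {
    if [ "$ireallymeanit" != "yes" ]; then
        echo "Refusing to reboot: pass ireallymeanit=yes to confirm." >&2
        return 1
    fi
    return 0
}
```

With the variable set to `yes`, the guard task is skipped (as seen for every node below) and the reboot proceeds.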
2025-05-04 00:37:18.242400 | orchestrator |
2025-05-04 00:37:18.242887 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:18.242930 | orchestrator |
2025-05-04 00:37:18.243880 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:18.244122 | orchestrator | Sunday 04 May 2025 00:37:18 +0000 (0:00:00.173) 0:00:00.174 ************
2025-05-04 00:37:18.341534 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:37:18.341945 | orchestrator |
2025-05-04 00:37:18.342369 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:18.343062 | orchestrator | Sunday 04 May 2025 00:37:18 +0000 (0:00:00.101) 0:00:00.275 ************
2025-05-04 00:37:19.248680 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:37:19.248872 | orchestrator |
2025-05-04 00:37:19.249398 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:19.251727 | orchestrator | Sunday 04 May 2025 00:37:19 +0000 (0:00:00.905) 0:00:01.181 ************
2025-05-04 00:37:19.368230 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:37:19.368673 | orchestrator |
2025-05-04 00:37:19.368715 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:19.368739 | orchestrator |
2025-05-04 00:37:19.369048 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:19.369656 | orchestrator | Sunday 04 May 2025 00:37:19 +0000 (0:00:00.113) 0:00:01.294 ************
2025-05-04 00:37:19.474794 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:37:19.474967 | orchestrator |
2025-05-04 00:37:19.475062 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:19.475366 | orchestrator | Sunday 04 May 2025 00:37:19 +0000 (0:00:00.113) 0:00:01.408 ************
2025-05-04 00:37:20.110361 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:37:20.110548 | orchestrator |
2025-05-04 00:37:20.111140 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:20.112112 | orchestrator | Sunday 04 May 2025 00:37:20 +0000 (0:00:00.635) 0:00:02.043 ************
2025-05-04 00:37:20.226586 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:37:20.227040 | orchestrator |
2025-05-04 00:37:20.228577 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:20.229571 | orchestrator |
2025-05-04 00:37:20.230609 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:20.231453 | orchestrator | Sunday 04 May 2025 00:37:20 +0000 (0:00:00.113) 0:00:02.157 ************
2025-05-04 00:37:20.318474 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:37:20.319666 | orchestrator |
2025-05-04 00:37:20.320256 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:20.321095 | orchestrator | Sunday 04 May 2025 00:37:20 +0000 (0:00:00.093) 0:00:02.250 ************
2025-05-04 00:37:21.078183 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:37:21.078433 | orchestrator |
2025-05-04 00:37:21.078463 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:21.078508 | orchestrator | Sunday 04 May 2025 00:37:21 +0000 (0:00:00.759) 0:00:03.010 ************
2025-05-04 00:37:21.206209 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:37:21.206402 | orchestrator |
2025-05-04 00:37:21.206911 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:21.208670 | orchestrator |
2025-05-04 00:37:21.208750 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:21.210135 | orchestrator | Sunday 04 May 2025 00:37:21 +0000 (0:00:00.125) 0:00:03.135 ************
2025-05-04 00:37:21.292537 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:37:21.293239 | orchestrator |
2025-05-04 00:37:21.294530 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:21.295704 | orchestrator | Sunday 04 May 2025 00:37:21 +0000 (0:00:00.089) 0:00:03.225 ************
2025-05-04 00:37:21.958507 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:37:21.958867 | orchestrator |
2025-05-04 00:37:21.960034 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:21.960812 | orchestrator | Sunday 04 May 2025 00:37:21 +0000 (0:00:00.666) 0:00:03.891 ************
2025-05-04 00:37:22.079377 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:37:22.080994 | orchestrator |
2025-05-04 00:37:22.081618 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:22.082233 | orchestrator |
2025-05-04 00:37:22.082673 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:22.083109 | orchestrator | Sunday 04 May 2025 00:37:22 +0000 (0:00:00.112) 0:00:04.004 ************
2025-05-04 00:37:22.191867 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:37:22.192346 | orchestrator |
2025-05-04 00:37:22.193099 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:22.193743 | orchestrator | Sunday 04 May 2025 00:37:22 +0000 (0:00:00.120) 0:00:04.124 ************
2025-05-04 00:37:22.928905 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:37:22.930939 | orchestrator |
2025-05-04 00:37:22.931006 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:22.931036 | orchestrator | Sunday 04 May 2025 00:37:22 +0000 (0:00:00.730) 0:00:04.855 ************
2025-05-04 00:37:23.046819 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:37:23.047103 | orchestrator |
2025-05-04 00:37:23.048354 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-04 00:37:23.049485 | orchestrator |
2025-05-04 00:37:23.051230 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-04 00:37:23.152444 | orchestrator | Sunday 04 May 2025 00:37:23 +0000 (0:00:00.125) 0:00:04.980 ************
2025-05-04 00:37:23.152590 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:37:23.154651 | orchestrator |
2025-05-04 00:37:23.155902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-04 00:37:23.155956 | orchestrator | Sunday 04 May 2025 00:37:23 +0000 (0:00:00.104) 0:00:05.085 ************
2025-05-04 00:37:23.836821 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:37:23.836945 | orchestrator |
2025-05-04 00:37:23.837250 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-04 00:37:23.838792 | orchestrator | Sunday 04 May 2025 00:37:23 +0000 (0:00:00.683) 0:00:05.768 ************
2025-05-04 00:37:23.882479 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:37:23.883137 | orchestrator |
2025-05-04 00:37:23.884140 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:37:23.885405 | orchestrator | 2025-05-04 00:37:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:37:23.886496 | orchestrator | 2025-05-04 00:37:23 | INFO  | Please wait and do not abort execution.
2025-05-04 00:37:23.886592 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.887517 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.887847 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.888473 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.888970 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.889734 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:37:23.889911 | orchestrator |
2025-05-04 00:37:23.890492 | orchestrator | Sunday 04 May 2025 00:37:23 +0000 (0:00:00.047) 0:00:05.816 ************
2025-05-04 00:37:23.890904 | orchestrator | ===============================================================================
2025-05-04 00:37:23.891217 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.38s
2025-05-04 00:37:23.891787 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2025-05-04 00:37:23.892036 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s
2025-05-04 00:37:24.478590 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-04 00:37:26.127230 | orchestrator | 2025-05-04 00:37:26 | INFO  | Task af08fb73-a534-4cf4-8315-99cafda9e3b4 (wait-for-connection) was prepared for execution.
2025-05-04 00:37:29.209574 | orchestrator | 2025-05-04 00:37:26 | INFO  | It takes a moment until task af08fb73-a534-4cf4-8315-99cafda9e3b4 (wait-for-connection) has been started and output is visible here.
2025-05-04 00:37:29.209848 | orchestrator |
2025-05-04 00:37:29.209975 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-04 00:37:29.213036 | orchestrator |
2025-05-04 00:37:29.214115 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-04 00:37:29.214161 | orchestrator | Sunday 04 May 2025 00:37:29 +0000 (0:00:00.186) 0:00:00.186 ************
2025-05-04 00:37:41.961747 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:37:41.962294 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:37:41.962337 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:37:41.962361 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:37:41.963246 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:37:41.964654 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:37:41.965325 | orchestrator |
2025-05-04 00:37:41.966072 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:37:41.966684 | orchestrator | 2025-05-04 00:37:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:37:41.967473 | orchestrator | 2025-05-04 00:37:41 | INFO  | Please wait and do not abort execution.
2025-05-04 00:37:41.967508 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.968081 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.968924 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.969340 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.970566 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.971060 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:37:41.971737 | orchestrator |
2025-05-04 00:37:41.972712 | orchestrator | Sunday 04 May 2025 00:37:41 +0000 (0:00:12.751) 0:00:12.937 ************
2025-05-04 00:37:41.973048 | orchestrator | ===============================================================================
2025-05-04 00:37:41.973726 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.75s
2025-05-04 00:37:42.467587 | orchestrator | + osism apply hddtemp
2025-05-04 00:37:43.904466 | orchestrator | 2025-05-04 00:37:43 | INFO  | Task 9664ff69-f32b-44f4-b2c9-16b3ff4d7c84 (hddtemp) was prepared for execution.
2025-05-04 00:37:47.026108 | orchestrator | 2025-05-04 00:37:43 | INFO  | It takes a moment until task 9664ff69-f32b-44f4-b2c9-16b3ff4d7c84 (hddtemp) has been started and output is visible here.
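The `wait-for-connection` play blocks until every rebooted node answers again (about 13 seconds here). Under the hood this is Ansible's `wait_for_connection` module; the polling idea can be sketched as a generic retry loop in shell (the function name and the example check command are illustrative, not taken from the playbook):

```shell
#!/bin/sh
# Retry a command once per second until it succeeds or timeout_s
# seconds have elapsed. Usage:
#   wait_until_ok 60 ssh -o ConnectTimeout=5 testbed-node-0 true
wait_until_ok() {
    timeout_s="$1"; shift
    elapsed=0
    while ! "$@"; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout_s" ]; then
            return 1    # gave up before the check ever succeeded
        fi
        sleep 1
    done
    return 0
}
```

Ansible's module additionally runs a full module round-trip as the check, so success means the node is usable for tasks, not merely that the SSH port is open.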
2025-05-04 00:37:47.026259 | orchestrator | 2025-05-04 00:37:47.027369 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-04 00:37:47.027606 | orchestrator | 2025-05-04 00:37:47.031919 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-04 00:37:47.179007 | orchestrator | Sunday 04 May 2025 00:37:47 +0000 (0:00:00.192) 0:00:00.192 ************ 2025-05-04 00:37:47.179152 | orchestrator | ok: [testbed-manager] 2025-05-04 00:37:47.257432 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:37:47.330974 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:37:47.406403 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:37:47.481556 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:37:47.712870 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:37:47.713056 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:37:47.714002 | orchestrator | 2025-05-04 00:37:47.714893 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-04 00:37:47.717757 | orchestrator | Sunday 04 May 2025 00:37:47 +0000 (0:00:00.685) 0:00:00.877 ************ 2025-05-04 00:37:48.896877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:37:48.898884 | orchestrator | 2025-05-04 00:37:50.888521 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-04 00:37:50.888713 | orchestrator | Sunday 04 May 2025 00:37:48 +0000 (0:00:01.181) 0:00:02.059 ************ 2025-05-04 00:37:50.888752 | orchestrator | ok: [testbed-manager] 2025-05-04 00:37:50.891744 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:37:50.892716 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:37:50.892751 | 
orchestrator | ok: [testbed-node-2] 2025-05-04 00:37:50.892773 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:37:50.893461 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:37:50.894395 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:37:50.895526 | orchestrator | 2025-05-04 00:37:50.896538 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-04 00:37:50.897096 | orchestrator | Sunday 04 May 2025 00:37:50 +0000 (0:00:01.994) 0:00:04.054 ************ 2025-05-04 00:37:51.460062 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:37:51.556833 | orchestrator | changed: [testbed-manager] 2025-05-04 00:37:52.076977 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:37:52.078952 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:37:52.080458 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:37:52.082369 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:37:52.083828 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:37:52.084786 | orchestrator | 2025-05-04 00:37:52.085471 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-04 00:37:52.086388 | orchestrator | Sunday 04 May 2025 00:37:52 +0000 (0:00:01.186) 0:00:05.240 ************ 2025-05-04 00:37:53.330764 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:37:53.331858 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:37:53.332557 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:37:53.333891 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:37:53.334991 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:37:53.335505 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:37:53.336731 | orchestrator | ok: [testbed-manager] 2025-05-04 00:37:53.337534 | orchestrator | 2025-05-04 00:37:53.338122 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-04 00:37:53.338702 | orchestrator | Sunday 04 May 2025 00:37:53 +0000 
(0:00:01.253) 0:00:06.494 ************ 2025-05-04 00:37:53.591915 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:37:53.679549 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:37:53.773927 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:37:53.850221 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:37:53.960122 | orchestrator | changed: [testbed-manager] 2025-05-04 00:37:53.960748 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:37:53.960861 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:37:53.961709 | orchestrator | 2025-05-04 00:37:53.962187 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-04 00:37:53.963300 | orchestrator | Sunday 04 May 2025 00:37:53 +0000 (0:00:00.633) 0:00:07.127 ************ 2025-05-04 00:38:06.109351 | orchestrator | changed: [testbed-manager] 2025-05-04 00:38:06.109724 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:38:06.109762 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:38:06.109778 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:38:06.109792 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:38:06.109806 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:38:06.109821 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:38:06.109842 | orchestrator | 2025-05-04 00:38:06.110828 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-04 00:38:06.112392 | orchestrator | Sunday 04 May 2025 00:38:06 +0000 (0:00:12.138) 0:00:19.265 ************ 2025-05-04 00:38:07.297149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:38:07.297696 | orchestrator | 2025-05-04 00:38:07.299025 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-04 00:38:07.299612 | orchestrator | Sunday 04 May 2025 00:38:07 +0000 (0:00:01.193) 0:00:20.459 ************ 2025-05-04 00:38:09.134253 | orchestrator | changed: [testbed-manager] 2025-05-04 00:38:09.134832 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:38:09.135537 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:38:09.136739 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:38:09.137670 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:38:09.138453 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:38:09.139752 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:38:09.140438 | orchestrator | 2025-05-04 00:38:09.141078 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:38:09.141679 | orchestrator | 2025-05-04 00:38:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-04 00:38:09.142067 | orchestrator | 2025-05-04 00:38:09 | INFO  | Please wait and do not abort execution. 
2025-05-04 00:38:09.143150 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:38:09.144101 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.145027 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.146101 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.146695 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.147292 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.147786 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:09.148256 | orchestrator |
2025-05-04 00:38:09.149444 | orchestrator | Sunday 04 May 2025 00:38:09 +0000 (0:00:01.840) 0:00:22.300 ************
2025-05-04 00:38:09.150499 | orchestrator | ===============================================================================
2025-05-04 00:38:09.151392 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.14s
2025-05-04 00:38:09.151823 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.99s
2025-05-04 00:38:09.152502 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s
2025-05-04 00:38:09.152985 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s
2025-05-04 00:38:09.154121 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s
2025-05-04 00:38:09.154801 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2025-05-04 00:38:09.155486 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2025-05-04 00:38:09.156515 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s
2025-05-04 00:38:09.156846 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.63s
2025-05-04 00:38:09.737395 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-04 00:38:11.095229 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-04 00:38:11.096336 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-04 00:38:11.096375 | orchestrator | + local max_attempts=60
2025-05-04 00:38:11.096391 | orchestrator | + local name=ceph-ansible
2025-05-04 00:38:11.096406 | orchestrator | + local attempt_num=1
2025-05-04 00:38:11.096428 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-04 00:38:11.130624 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-04 00:38:11.130823 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-04 00:38:11.131536 | orchestrator | + local max_attempts=60
2025-05-04 00:38:11.155123 | orchestrator | + local name=kolla-ansible
2025-05-04 00:38:11.155189 | orchestrator | + local attempt_num=1
2025-05-04 00:38:11.155206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-04 00:38:11.155233 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-04 00:38:11.155300 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-04 00:38:11.155319 | orchestrator | + local max_attempts=60
2025-05-04 00:38:11.155334 | orchestrator | + local name=osism-ansible
2025-05-04 00:38:11.155348 | orchestrator | + local attempt_num=1
2025-05-04 00:38:11.155366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-04 00:38:11.182221 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-04 00:38:11.356025 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-04 00:38:11.356166 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-04 00:38:11.356204 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-04 00:38:11.495795 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-04 00:38:11.636598 | orchestrator | ARA in osism-ansible already disabled.
2025-05-04 00:38:11.787712 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-04 00:38:11.787981 | orchestrator | + osism apply gather-facts
2025-05-04 00:38:13.157724 | orchestrator | 2025-05-04 00:38:13 | INFO  | Task d6892974-f07c-49e3-8a76-cc88d1ac05ca (gather-facts) was prepared for execution.
2025-05-04 00:38:16.157606 | orchestrator | 2025-05-04 00:38:13 | INFO  | It takes a moment until task d6892974-f07c-49e3-8a76-cc88d1ac05ca (gather-facts) has been started and output is visible here.
2025-05-04 00:38:16.157831 | orchestrator |
2025-05-04 00:38:16.158347 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-04 00:38:16.158379 | orchestrator |
2025-05-04 00:38:16.158404 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:38:16.159950 | orchestrator | Sunday 04 May 2025 00:38:16 +0000 (0:00:00.132) 0:00:00.132 ************
2025-05-04 00:38:20.852586 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:38:20.853222 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:38:20.854773 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:38:20.854964 | orchestrator | ok: [testbed-manager]
2025-05-04 00:38:20.855965 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:38:20.857199 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:38:20.858526 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:38:20.858881 | orchestrator |
2025-05-04 00:38:20.859438 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
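The `wait_for_container_healthy` calls traced above poll `docker inspect` for the container's `.State.Health.Status` until it reports `healthy`. A minimal sketch of such a helper, assuming the real script's retry and sleep details differ; `CHECK_CMD` is an injection point added here only so the sketch can be exercised without a Docker daemon:

```shell
# Hedged reconstruction of wait_for_container_healthy as seen in the trace.
# CHECK_CMD is an assumption for testability; the real script presumably calls
# `/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"` directly.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(${CHECK_CMD:-docker_health_status} "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # back off before polling the health status again
    done
}

docker_health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}
```

In the run above all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy, so each call returned on its first poll.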
2025-05-04 00:38:20.859967 | orchestrator |
2025-05-04 00:38:20.860440 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-04 00:38:20.860888 | orchestrator | Sunday 04 May 2025 00:38:20 +0000 (0:00:04.698) 0:00:04.831 ************
2025-05-04 00:38:20.985550 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:38:21.051016 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:38:21.118516 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:38:21.185707 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:38:21.255898 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:38:21.285974 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:38:21.286198 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:38:21.286452 | orchestrator |
2025-05-04 00:38:21.287323 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:38:21.287457 | orchestrator | 2025-05-04 00:38:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:38:21.287540 | orchestrator | 2025-05-04 00:38:21 | INFO  | Please wait and do not abort execution.
2025-05-04 00:38:21.287911 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.288227 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.288611 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.288816 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.289216 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.289479 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.289758 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-04 00:38:21.290003 | orchestrator |
2025-05-04 00:38:21.290296 | orchestrator | Sunday 04 May 2025 00:38:21 +0000 (0:00:00.436) 0:00:05.268 ************
2025-05-04 00:38:21.290617 | orchestrator | ===============================================================================
2025-05-04 00:38:21.291152 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s
2025-05-04 00:38:21.291233 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s
2025-05-04 00:38:21.676563 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-04 00:38:21.685399 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-04 00:38:21.704539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-04 00:38:21.717102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-04 00:38:21.727759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-04 00:38:21.740183 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-04 00:38:21.760565 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-04 00:38:21.779469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-04 00:38:21.797581 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-04 00:38:21.811569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-04 00:38:21.831945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-04 00:38:21.847329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-04 00:38:21.869122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-04 00:38:21.883956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-04 00:38:21.903734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-04 00:38:21.922122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-04 00:38:21.940273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-04 00:38:21.958786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-04 00:38:21.973170 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-04 00:38:21.988936 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-04 00:38:22.007618 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-04 00:38:22.107641 | orchestrator | changed
2025-05-04 00:38:22.159366 |
2025-05-04 00:38:22.159502 | TASK [Deploy services]
2025-05-04 00:38:22.266756 | orchestrator | skipping: Conditional result was False
2025-05-04 00:38:22.286779 |
2025-05-04 00:38:22.286978 | TASK [Deploy in a nutshell]
2025-05-04 00:38:23.078783 | orchestrator | + set -e
2025-05-04 00:38:23.078988 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-04 00:38:23.079014 | orchestrator | ++ export INTERACTIVE=false
2025-05-04 00:38:23.079031 | orchestrator | ++ INTERACTIVE=false
2025-05-04 00:38:23.079075 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-04 00:38:23.079093 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-04 00:38:23.079117 | orchestrator | + source /opt/manager-vars.sh
2025-05-04 00:38:23.079199 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-04 00:38:23.079228 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-04 00:38:23.079245 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-04 00:38:23.079259 | orchestrator | ++ CEPH_VERSION=reef
2025-05-04 00:38:23.079274 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-04 00:38:23.079288 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-04 00:38:23.079302 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-04 00:38:23.079316 | orchestrator | ++ MANAGER_VERSION=8.1.0
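Shortly after sourcing these variables, the trace shows a `semver 8.1.0 7.0.0` call whose result (`1`) gates the pull-images step. The helper's implementation is not visible in the log; a hedged sketch of one common way to build such a comparison with `sort -V` (an assumption, not the testbed's actual code):

```shell
# Hedged sketch of a semver-style comparison helper (assumed implementation;
# the testbed's real `semver` may differ). Prints 1, 0, or -1 depending on
# whether the first version is greater than, equal to, or less than the second.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1    # $2 sorts first under version ordering, so $1 is newer
    else
        echo -1
    fi
}
```

With MANAGER_VERSION=8.1.0 such a helper yields `1` against 7.0.0, which matches the `[[ 1 -ge 0 ]]` branch taken in the trace below.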
2025-05-04 00:38:23.079398 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-04 00:38:23.079427 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-04 00:38:23.079600 | orchestrator | ++ export ARA=false
2025-05-04 00:38:23.079621 | orchestrator | ++ ARA=false
2025-05-04 00:38:23.079636 | orchestrator | ++ export TEMPEST=false
2025-05-04 00:38:23.079673 | orchestrator | ++ TEMPEST=false
2025-05-04 00:38:23.079687 | orchestrator | ++ export IS_ZUUL=true
2025-05-04 00:38:23.079701 | orchestrator | ++ IS_ZUUL=true
2025-05-04 00:38:23.079715 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-05-04 00:38:23.079732 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-05-04 00:38:23.079751 | orchestrator | ++ export EXTERNAL_API=false
2025-05-04 00:38:23.079766 | orchestrator | ++ EXTERNAL_API=false
2025-05-04 00:38:23.079781 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-04 00:38:23.079799 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-04 00:38:23.079813 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-04 00:38:23.079828 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-04 00:38:23.079845 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-04 00:38:23.079899 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-04 00:38:23.079919 | orchestrator | + echo
2025-05-04 00:38:23.080451 | orchestrator |
2025-05-04 00:38:23.082121 | orchestrator | # PULL IMAGES
2025-05-04 00:38:23.082174 | orchestrator |
2025-05-04 00:38:23.082195 | orchestrator | + echo '# PULL IMAGES'
2025-05-04 00:38:23.082213 | orchestrator | + echo
2025-05-04 00:38:23.082237 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-04 00:38:23.147392 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-04 00:38:24.516520 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-04 00:38:24.516748 | orchestrator | 2025-05-04 00:38:24 | INFO  | Trying to run play pull-images in environment custom
2025-05-04 00:38:24.567822 | orchestrator | 2025-05-04 00:38:24 | INFO  | Task d6a22e2b-46d9-43ad-8f82-4aa21aec54a0 (pull-images) was prepared for execution.
2025-05-04 00:38:27.628487 | orchestrator | 2025-05-04 00:38:24 | INFO  | It takes a moment until task d6a22e2b-46d9-43ad-8f82-4aa21aec54a0 (pull-images) has been started and output is visible here.
2025-05-04 00:38:27.628884 | orchestrator |
2025-05-04 00:38:27.630717 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-04 00:38:27.630773 | orchestrator |
2025-05-04 00:38:27.630800 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-04 00:38:27.631480 | orchestrator | Sunday 04 May 2025 00:38:27 +0000 (0:00:00.139) 0:00:00.139 ************
2025-05-04 00:39:07.425529 | orchestrator | changed: [testbed-manager]
2025-05-04 00:39:54.013266 | orchestrator |
2025-05-04 00:39:54.013444 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-04 00:39:54.013477 | orchestrator | Sunday 04 May 2025 00:39:07 +0000 (0:00:39.796) 0:00:39.936 ************
2025-05-04 00:39:54.013511 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-04 00:39:54.013639 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-04 00:39:54.013902 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-04 00:39:54.014878 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-04 00:39:54.018348 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-04 00:39:54.021760 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-04 00:39:54.023975 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-04 00:39:54.024044 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-04 00:39:54.024095 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-04 00:39:54.024120 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-04 00:39:54.024189 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-04 00:39:54.024207 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-04 00:39:54.024222 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-04 00:39:54.024237 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-04 00:39:54.024257 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-04 00:39:54.024511 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-04 00:39:54.024849 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-04 00:39:54.025475 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-04 00:39:54.028594 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-04 00:39:54.028851 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-04 00:39:54.028878 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-04 00:39:54.028912 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-04 00:39:54.028929 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-04 00:39:54.028944 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-04 00:39:54.028958 | orchestrator |
2025-05-04 00:39:54.028979 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:39:54.029325 | orchestrator | 2025-05-04 00:39:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:39:54.029799 | orchestrator | 2025-05-04 00:39:54 | INFO  | Please wait and do not abort execution.
2025-05-04 00:39:54.029832 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:39:54.030137 | orchestrator |
2025-05-04 00:39:54.030612 | orchestrator | Sunday 04 May 2025 00:39:54 +0000 (0:00:46.590) 0:01:26.526 ************
2025-05-04 00:39:54.030903 | orchestrator | ===============================================================================
2025-05-04 00:39:54.031207 | orchestrator | Pull other images ------------------------------------------------------ 46.59s
2025-05-04 00:39:54.031599 | orchestrator | Pull keystone image ---------------------------------------------------- 39.80s
2025-05-04 00:39:56.041379 | orchestrator | 2025-05-04 00:39:56 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-04 00:39:56.090167 | orchestrator | 2025-05-04 00:39:56 | INFO  | Task 5126112c-6af1-4577-8ed2-3fa555a0799e (wipe-partitions) was prepared for execution.
2025-05-04 00:39:59.350750 | orchestrator | 2025-05-04 00:39:56 | INFO  | It takes a moment until task 5126112c-6af1-4577-8ed2-3fa555a0799e (wipe-partitions) has been started and output is visible here.
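The wipe-partitions play launched here clears leftover metadata from the storage nodes' data disks before Ceph is deployed: its tasks run `wipefs` to drop filesystem/RAID signatures and then overwrite the first 32 MiB of each device with zeros. A hedged sketch of the equivalent per-device shell steps (the actual play drives this through Ansible tasks, and the device names such as /dev/sdb are illustrative):

```shell
# Hedged sketch of the per-device wipe the play performs (illustration only;
# the play targets real block devices such as /dev/sdb..sdd on the nodes).
wipe_device() {
    local dev="$1"
    # Drop any filesystem/RAID signatures; tolerate devices with none.
    wipefs --all "$dev" 2>/dev/null || true
    # Zero the first 32 MiB so stale partition tables and Ceph labels are gone.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync status=none
}
```

The play then reloads udev rules and requests device events from the kernel (presumably via `udevadm`) so the now-empty devices are re-scanned.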
2025-05-04 00:39:59.350912 | orchestrator |
2025-05-04 00:39:59.353211 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-04 00:39:59.353415 | orchestrator |
2025-05-04 00:39:59.353451 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-04 00:39:59.353803 | orchestrator | Sunday 04 May 2025 00:39:59 +0000 (0:00:00.127) 0:00:00.127 ************
2025-05-04 00:39:59.928066 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:39:59.928246 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:39:59.928271 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:39:59.928287 | orchestrator |
2025-05-04 00:39:59.928304 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-04 00:39:59.928327 | orchestrator | Sunday 04 May 2025 00:39:59 +0000 (0:00:00.575) 0:00:00.702 ************
2025-05-04 00:40:00.088888 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:00.206920 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:40:00.207356 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:40:00.207390 | orchestrator |
2025-05-04 00:40:00.207449 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-04 00:40:00.207652 | orchestrator | Sunday 04 May 2025 00:40:00 +0000 (0:00:00.279) 0:00:00.982 ************
2025-05-04 00:40:00.966292 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:40:00.966799 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:40:00.966830 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:40:00.966989 | orchestrator |
2025-05-04 00:40:00.967341 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-04 00:40:00.967804 | orchestrator | Sunday 04 May 2025 00:40:00 +0000 (0:00:00.760) 0:00:01.742 ************
2025-05-04 00:40:01.127331 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:01.243779 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:40:01.244045 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:40:01.244786 | orchestrator |
2025-05-04 00:40:01.245320 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-04 00:40:01.245910 | orchestrator | Sunday 04 May 2025 00:40:01 +0000 (0:00:00.276) 0:00:02.019 ************
2025-05-04 00:40:02.456393 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-04 00:40:02.456819 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-04 00:40:02.456865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-04 00:40:02.457183 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-04 00:40:02.457576 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-04 00:40:02.458235 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-04 00:40:02.458407 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-04 00:40:02.458571 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-04 00:40:02.459154 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-04 00:40:02.459625 | orchestrator |
2025-05-04 00:40:02.460558 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-04 00:40:02.461395 | orchestrator | Sunday 04 May 2025 00:40:02 +0000 (0:00:01.214) 0:00:03.234 ************
2025-05-04 00:40:03.793153 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-04 00:40:03.794153 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-04 00:40:03.796375 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-04 00:40:03.796663 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-04 00:40:03.798916 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-04 00:40:03.803148 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-04 00:40:03.803657 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-04 00:40:03.804546 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-04 00:40:03.805234 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-04 00:40:03.816142 | orchestrator |
2025-05-04 00:40:03.816256 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-04 00:40:03.816875 | orchestrator | Sunday 04 May 2025 00:40:03 +0000 (0:00:01.336) 0:00:04.570 ************
2025-05-04 00:40:06.141659 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-04 00:40:06.142719 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-04 00:40:06.143488 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-04 00:40:06.144131 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-04 00:40:06.152776 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-04 00:40:06.154083 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-04 00:40:06.154181 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-04 00:40:06.154201 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-04 00:40:06.154217 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-04 00:40:06.154232 | orchestrator |
2025-05-04 00:40:06.154247 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-04 00:40:06.154275 | orchestrator | Sunday 04 May 2025 00:40:06 +0000 (0:00:02.337) 0:00:06.907 ************
2025-05-04 00:40:06.761854 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:40:06.762143 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:40:06.762542 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:40:06.762585 | orchestrator |
2025-05-04 00:40:06.764321 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-04 00:40:07.421838 | orchestrator | Sunday 04 May 2025 00:40:06 +0000 (0:00:00.632) 0:00:07.540 ************
2025-05-04 00:40:07.421981 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:40:07.422105 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:40:07.422445 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:40:07.422968 | orchestrator |
2025-05-04 00:40:07.423132 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:40:07.423656 | orchestrator | 2025-05-04 00:40:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:40:07.426551 | orchestrator | 2025-05-04 00:40:07 | INFO  | Please wait and do not abort execution.
2025-05-04 00:40:07.428171 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:07.428624 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:07.429358 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:07.429861 | orchestrator |
2025-05-04 00:40:07.430314 | orchestrator | Sunday 04 May 2025 00:40:07 +0000 (0:00:00.660) 0:00:08.201 ************
2025-05-04 00:40:07.430809 | orchestrator | ===============================================================================
2025-05-04 00:40:07.431266 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.34s
2025-05-04 00:40:07.431753 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s
2025-05-04 00:40:07.432241 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2025-05-04 00:40:07.432650 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.76s
2025-05-04 00:40:07.433199 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2025-05-04 00:40:07.433625 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-05-04 00:40:07.434126 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2025-05-04 00:40:07.434650 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2025-05-04 00:40:07.438139 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2025-05-04 00:40:09.470590 | orchestrator | 2025-05-04 00:40:09 | INFO  | Task eff54222-c7ca-4457-ba56-fd4ddba1c7ae (facts) was prepared for execution.
2025-05-04 00:40:12.801192 | orchestrator | 2025-05-04 00:40:09 | INFO  | It takes a moment until task eff54222-c7ca-4457-ba56-fd4ddba1c7ae (facts) has been started and output is visible here.
2025-05-04 00:40:12.801341 | orchestrator |
2025-05-04 00:40:12.801540 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-04 00:40:12.802870 | orchestrator |
2025-05-04 00:40:12.803054 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-04 00:40:12.803637 | orchestrator | Sunday 04 May 2025 00:40:12 +0000 (0:00:00.244) 0:00:00.244 ************
2025-05-04 00:40:13.383223 | orchestrator | ok: [testbed-manager]
2025-05-04 00:40:13.894302 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:40:13.894791 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:40:13.895136 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:40:13.896911 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:40:13.897160 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:40:13.897185 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:40:13.897618 | orchestrator |
2025-05-04 00:40:13.897923 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-04 00:40:13.898504 | orchestrator | Sunday 04 May 2025 00:40:13 +0000 (0:00:01.093) 0:00:01.337 ************
2025-05-04 00:40:14.045566 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:40:14.113881 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:40:14.181667 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:40:14.252959 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:40:14.320817 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:14.962783 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:40:14.966400 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:40:14.966739 | orchestrator |
2025-05-04 00:40:14.967728 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-04 00:40:14.968612 | orchestrator |
2025-05-04 00:40:14.969151 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:40:14.970544 | orchestrator | Sunday 04 May 2025 00:40:14 +0000 (0:00:01.069) 0:00:02.407 ************
2025-05-04 00:40:19.490842 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:40:19.491209 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:40:19.491880 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:40:19.493238 | orchestrator | ok: [testbed-manager]
2025-05-04 00:40:19.493771 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:40:19.493801 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:40:19.496971 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:40:19.882628 | orchestrator |
2025-05-04 00:40:19.882820 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-04 00:40:19.882842 | orchestrator |
2025-05-04 00:40:19.882857 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-04 00:40:19.882872 | orchestrator | Sunday 04 May 2025 00:40:19 +0000 (0:00:04.528) 0:00:06.936 ************
2025-05-04 00:40:19.882918 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:40:20.016590 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:40:20.104678 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:40:20.183222 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:40:20.254592 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:20.294640 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:40:20.294995 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:40:20.295052 | orchestrator |
2025-05-04 00:40:20.295781 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:40:20.296687 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.297198 | orchestrator | 2025-05-04 00:40:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:40:20.297360 | orchestrator | 2025-05-04 00:40:20 | INFO  | Please wait and do not abort execution.
2025-05-04 00:40:20.297382 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.297413 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.297487 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.297996 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.298346 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.298586 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:40:20.298935 | orchestrator |
2025-05-04 00:40:20.299494 | orchestrator | Sunday 04 May 2025 00:40:20 +0000 (0:00:00.803) 0:00:07.740 ************
2025-05-04 00:40:20.299683 | orchestrator | ===============================================================================
2025-05-04 00:40:20.300903 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.53s
2025-05-04 00:40:22.729855 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-05-04 00:40:22.729983 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s
2025-05-04 00:40:22.730002 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s
2025-05-04 00:40:22.730093 | orchestrator | 2025-05-04 00:40:22 | INFO  | Task a0e4ea35-47de-4205-8f7e-12ac2e9c50fd (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-04 00:40:22.731338 | orchestrator | 2025-05-04 00:40:22 | INFO  | It takes a moment until task a0e4ea35-47de-4205-8f7e-12ac2e9c50fd (ceph-configure-lvm-volumes) has been started and output is visible here.
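The ceph-configure-lvm-volumes play queued here begins by building a list of available block devices on each storage node (its first tasks appear below). A hedged sketch of one way to enumerate device names from /sys/block; the real play uses Ansible facts and per-device link tasks, and the optional `root` parameter is an assumption added only so the function can be exercised against a fake directory tree:

```shell
# Hedged sketch: enumerate block device names the way the play's "Get initial
# list of available block devices" step conceptually does, by listing
# /sys/block. Pass a different root directory to test without real devices.
list_block_devices() {
    local root="${1:-/sys/block}"
    local entry
    for entry in "$root"/*; do
        [ -e "$entry" ] && basename "$entry"
    done
    return 0
}
```

On the testbed nodes this would yield the loop devices plus sda through sdd and sr0, matching the items iterated by the `_add-device-links.yml` includes below.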
2025-05-04 00:40:26.200649 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-04 00:40:26.756396 | orchestrator |
2025-05-04 00:40:26.756900 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-04 00:40:26.761346 | orchestrator |
2025-05-04 00:40:27.032297 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-04 00:40:27.032403 | orchestrator | Sunday 04 May 2025 00:40:26 +0000 (0:00:00.471) 0:00:00.471 ************
2025-05-04 00:40:27.032436 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-04 00:40:27.032500 | orchestrator |
2025-05-04 00:40:27.032523 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-04 00:40:27.032667 | orchestrator | Sunday 04 May 2025 00:40:27 +0000 (0:00:00.280) 0:00:00.752 ************
2025-05-04 00:40:27.235104 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:40:27.235590 | orchestrator |
2025-05-04 00:40:27.235626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:27.236085 | orchestrator | Sunday 04 May 2025 00:40:27 +0000 (0:00:00.202) 0:00:00.954 ************
2025-05-04 00:40:27.670833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-04 00:40:27.671552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-04 00:40:27.671587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-04 00:40:27.671612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-04 00:40:27.674560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-04 00:40:27.674662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-04 00:40:27.674684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-04 00:40:27.674720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-04 00:40:27.674741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-04 00:40:27.675214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-04 00:40:27.675962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-04 00:40:27.676435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-04 00:40:27.677048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-04 00:40:27.677486 | orchestrator |
2025-05-04 00:40:27.677952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:27.678426 | orchestrator | Sunday 04 May 2025 00:40:27 +0000 (0:00:00.429) 0:00:01.383 ************
2025-05-04 00:40:27.847941 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:27.849345 | orchestrator |
2025-05-04 00:40:27.851545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:27.852815 | orchestrator | Sunday 04 May 2025 00:40:27 +0000 (0:00:00.183) 0:00:01.567 ************
2025-05-04 00:40:28.026228 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.026853 | orchestrator |
2025-05-04 00:40:28.026909 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:28.027368 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.178) 0:00:01.746 ************
2025-05-04 00:40:28.210160 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.210663 | orchestrator |
2025-05-04 00:40:28.210897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:28.211519 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.182) 0:00:01.928 ************
2025-05-04 00:40:28.416347 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.416673 | orchestrator |
2025-05-04 00:40:28.416753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:28.416779 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.205) 0:00:02.133 ************
2025-05-04 00:40:28.600646 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.602096 | orchestrator |
2025-05-04 00:40:28.602241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:28.602272 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.184) 0:00:02.318 ************
2025-05-04 00:40:28.781930 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.782212 | orchestrator |
2025-05-04 00:40:28.782255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:28.782358 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.184) 0:00:02.502 ************
2025-05-04 00:40:28.959609 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:28.959852 | orchestrator |
2025-05-04 00:40:28.961011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:29.146969 | orchestrator | Sunday 04 May 2025 00:40:28 +0000 (0:00:00.177) 0:00:02.679 ************
2025-05-04 00:40:29.147087 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:40:29.147654 | orchestrator |
2025-05-04 00:40:29.148505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:40:29.152688 |
orchestrator | Sunday 04 May 2025 00:40:29 +0000 (0:00:00.187) 0:00:02.867 ************ 2025-05-04 00:40:29.674987 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6) 2025-05-04 00:40:29.678957 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6) 2025-05-04 00:40:29.679435 | orchestrator | 2025-05-04 00:40:29.679997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:29.680658 | orchestrator | Sunday 04 May 2025 00:40:29 +0000 (0:00:00.526) 0:00:03.393 ************ 2025-05-04 00:40:30.391314 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7) 2025-05-04 00:40:30.394126 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7) 2025-05-04 00:40:30.394686 | orchestrator | 2025-05-04 00:40:30.395660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:30.396298 | orchestrator | Sunday 04 May 2025 00:40:30 +0000 (0:00:00.713) 0:00:04.107 ************ 2025-05-04 00:40:30.790377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93) 2025-05-04 00:40:30.791375 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93) 2025-05-04 00:40:30.791414 | orchestrator | 2025-05-04 00:40:30.791543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:30.793803 | orchestrator | Sunday 04 May 2025 00:40:30 +0000 (0:00:00.403) 0:00:04.511 ************ 2025-05-04 00:40:31.205113 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc) 2025-05-04 00:40:31.205383 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc) 2025-05-04 00:40:31.205836 | orchestrator | 2025-05-04 00:40:31.205871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:31.513257 | orchestrator | Sunday 04 May 2025 00:40:31 +0000 (0:00:00.412) 0:00:04.923 ************ 2025-05-04 00:40:31.513408 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-04 00:40:31.513473 | orchestrator | 2025-05-04 00:40:31.513492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:31.513511 | orchestrator | Sunday 04 May 2025 00:40:31 +0000 (0:00:00.307) 0:00:05.231 ************ 2025-05-04 00:40:31.917926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-04 00:40:31.918185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-04 00:40:31.918218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-04 00:40:31.918589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-04 00:40:31.919771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-04 00:40:31.920164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-04 00:40:31.920275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-04 00:40:31.920655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-04 00:40:31.921913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-04 00:40:31.922218 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-04 00:40:31.923165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-04 00:40:31.924505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-04 00:40:31.925694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-04 00:40:31.926283 | orchestrator | 2025-05-04 00:40:31.926858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:31.927187 | orchestrator | Sunday 04 May 2025 00:40:31 +0000 (0:00:00.406) 0:00:05.638 ************ 2025-05-04 00:40:32.140063 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:32.140761 | orchestrator | 2025-05-04 00:40:32.140798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:32.140823 | orchestrator | Sunday 04 May 2025 00:40:32 +0000 (0:00:00.218) 0:00:05.857 ************ 2025-05-04 00:40:32.350960 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:32.351204 | orchestrator | 2025-05-04 00:40:32.352746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:32.353150 | orchestrator | Sunday 04 May 2025 00:40:32 +0000 (0:00:00.212) 0:00:06.069 ************ 2025-05-04 00:40:32.532161 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:32.532345 | orchestrator | 2025-05-04 00:40:32.532377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:32.723938 | orchestrator | Sunday 04 May 2025 00:40:32 +0000 (0:00:00.183) 0:00:06.252 ************ 2025-05-04 00:40:32.724065 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:32.725523 | orchestrator | 2025-05-04 00:40:32.725634 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-04 00:40:32.726099 | orchestrator | Sunday 04 May 2025 00:40:32 +0000 (0:00:00.189) 0:00:06.442 ************ 2025-05-04 00:40:32.934749 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:32.935642 | orchestrator | 2025-05-04 00:40:32.935817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:32.936066 | orchestrator | Sunday 04 May 2025 00:40:32 +0000 (0:00:00.211) 0:00:06.653 ************ 2025-05-04 00:40:33.580551 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:33.580757 | orchestrator | 2025-05-04 00:40:33.580785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:33.580808 | orchestrator | Sunday 04 May 2025 00:40:33 +0000 (0:00:00.647) 0:00:07.300 ************ 2025-05-04 00:40:33.755169 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:33.756294 | orchestrator | 2025-05-04 00:40:33.756609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:33.758260 | orchestrator | Sunday 04 May 2025 00:40:33 +0000 (0:00:00.174) 0:00:07.475 ************ 2025-05-04 00:40:33.979933 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:33.982885 | orchestrator | 2025-05-04 00:40:33.983227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:33.983254 | orchestrator | Sunday 04 May 2025 00:40:33 +0000 (0:00:00.224) 0:00:07.700 ************ 2025-05-04 00:40:34.613875 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-04 00:40:34.614005 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-04 00:40:34.614184 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-04 00:40:34.614345 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-04 00:40:34.614801 | orchestrator | 2025-05-04 
00:40:34.614919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:34.615152 | orchestrator | Sunday 04 May 2025 00:40:34 +0000 (0:00:00.631) 0:00:08.331 ************ 2025-05-04 00:40:34.814694 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:34.816996 | orchestrator | 2025-05-04 00:40:35.040615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:35.040781 | orchestrator | Sunday 04 May 2025 00:40:34 +0000 (0:00:00.203) 0:00:08.535 ************ 2025-05-04 00:40:35.040818 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:35.041192 | orchestrator | 2025-05-04 00:40:35.041230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:35.254803 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.226) 0:00:08.761 ************ 2025-05-04 00:40:35.254926 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:35.255849 | orchestrator | 2025-05-04 00:40:35.255883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:35.256262 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.213) 0:00:08.975 ************ 2025-05-04 00:40:35.448233 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:35.448830 | orchestrator | 2025-05-04 00:40:35.448995 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-04 00:40:35.449526 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.191) 0:00:09.167 ************ 2025-05-04 00:40:35.596201 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-04 00:40:35.598588 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-04 00:40:35.722599 | orchestrator | 2025-05-04 00:40:35.722700 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-05-04 00:40:35.722774 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.147) 0:00:09.314 ************ 2025-05-04 00:40:35.722804 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:35.723748 | orchestrator | 2025-05-04 00:40:35.726497 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-04 00:40:35.992653 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.128) 0:00:09.442 ************ 2025-05-04 00:40:35.992801 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:35.993566 | orchestrator | 2025-05-04 00:40:35.993674 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-04 00:40:35.994311 | orchestrator | Sunday 04 May 2025 00:40:35 +0000 (0:00:00.269) 0:00:09.712 ************ 2025-05-04 00:40:36.116057 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:36.117770 | orchestrator | 2025-05-04 00:40:36.118373 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-04 00:40:36.118404 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.121) 0:00:09.833 ************ 2025-05-04 00:40:36.234083 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:40:36.234938 | orchestrator | 2025-05-04 00:40:36.399782 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-04 00:40:36.399919 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.120) 0:00:09.953 ************ 2025-05-04 00:40:36.399952 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c91b3cb6-7edb-5452-ada6-d38ce882942b'}}) 2025-05-04 00:40:36.400023 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}}) 2025-05-04 00:40:36.400956 | orchestrator | 2025-05-04 00:40:36.401401 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-05-04 00:40:36.401982 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.166) 0:00:10.120 ************ 2025-05-04 00:40:36.538003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c91b3cb6-7edb-5452-ada6-d38ce882942b'}})  2025-05-04 00:40:36.538835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}})  2025-05-04 00:40:36.538877 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:36.541816 | orchestrator | 2025-05-04 00:40:36.705293 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-04 00:40:36.705423 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.136) 0:00:10.256 ************ 2025-05-04 00:40:36.705456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c91b3cb6-7edb-5452-ada6-d38ce882942b'}})  2025-05-04 00:40:36.707106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}})  2025-05-04 00:40:36.708490 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:36.708741 | orchestrator | 2025-05-04 00:40:36.708769 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-04 00:40:36.708789 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.165) 0:00:10.422 ************ 2025-05-04 00:40:36.863882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c91b3cb6-7edb-5452-ada6-d38ce882942b'}})  2025-05-04 00:40:36.864081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}})  2025-05-04 00:40:36.864220 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:36.864413 | 
orchestrator | 2025-05-04 00:40:36.867910 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-04 00:40:36.984794 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.157) 0:00:10.580 ************ 2025-05-04 00:40:36.984904 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:40:36.985805 | orchestrator | 2025-05-04 00:40:36.987797 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-04 00:40:36.988221 | orchestrator | Sunday 04 May 2025 00:40:36 +0000 (0:00:00.122) 0:00:10.703 ************ 2025-05-04 00:40:37.127248 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:40:37.128856 | orchestrator | 2025-05-04 00:40:37.129349 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-04 00:40:37.129776 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 (0:00:00.140) 0:00:10.844 ************ 2025-05-04 00:40:37.263662 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:37.265556 | orchestrator | 2025-05-04 00:40:37.265953 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-04 00:40:37.266776 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 (0:00:00.137) 0:00:10.981 ************ 2025-05-04 00:40:37.393001 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:37.393112 | orchestrator | 2025-05-04 00:40:37.393122 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-04 00:40:37.393131 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 (0:00:00.130) 0:00:11.112 ************ 2025-05-04 00:40:37.512944 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:37.513824 | orchestrator | 2025-05-04 00:40:37.514498 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-04 00:40:37.516495 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 
(0:00:00.119) 0:00:11.231 ************ 2025-05-04 00:40:37.803205 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:40:37.804672 | orchestrator |  "ceph_osd_devices": { 2025-05-04 00:40:37.804771 | orchestrator |  "sdb": { 2025-05-04 00:40:37.807608 | orchestrator |  "osd_lvm_uuid": "c91b3cb6-7edb-5452-ada6-d38ce882942b" 2025-05-04 00:40:37.808166 | orchestrator |  }, 2025-05-04 00:40:37.808989 | orchestrator |  "sdc": { 2025-05-04 00:40:37.809668 | orchestrator |  "osd_lvm_uuid": "bdbd5a24-b46a-5ddb-91ef-7688b352f27d" 2025-05-04 00:40:37.810376 | orchestrator |  } 2025-05-04 00:40:37.811230 | orchestrator |  } 2025-05-04 00:40:37.812271 | orchestrator | } 2025-05-04 00:40:37.812893 | orchestrator | 2025-05-04 00:40:37.813545 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-04 00:40:37.816160 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 (0:00:00.289) 0:00:11.521 ************ 2025-05-04 00:40:37.930513 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:37.932199 | orchestrator | 2025-05-04 00:40:37.935147 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-04 00:40:37.935264 | orchestrator | Sunday 04 May 2025 00:40:37 +0000 (0:00:00.129) 0:00:11.650 ************ 2025-05-04 00:40:38.063412 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:38.065747 | orchestrator | 2025-05-04 00:40:38.068140 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-04 00:40:38.174770 | orchestrator | Sunday 04 May 2025 00:40:38 +0000 (0:00:00.130) 0:00:11.781 ************ 2025-05-04 00:40:38.174883 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:40:38.175328 | orchestrator | 2025-05-04 00:40:38.175362 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-04 00:40:38.175548 | orchestrator | Sunday 04 May 2025 00:40:38 +0000 
(0:00:00.114) 0:00:11.895 ************ 2025-05-04 00:40:38.423459 | orchestrator | changed: [testbed-node-3] => { 2025-05-04 00:40:38.423636 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-04 00:40:38.425038 | orchestrator |  "ceph_osd_devices": { 2025-05-04 00:40:38.428844 | orchestrator |  "sdb": { 2025-05-04 00:40:38.429380 | orchestrator |  "osd_lvm_uuid": "c91b3cb6-7edb-5452-ada6-d38ce882942b" 2025-05-04 00:40:38.431392 | orchestrator |  }, 2025-05-04 00:40:38.432159 | orchestrator |  "sdc": { 2025-05-04 00:40:38.432971 | orchestrator |  "osd_lvm_uuid": "bdbd5a24-b46a-5ddb-91ef-7688b352f27d" 2025-05-04 00:40:38.433645 | orchestrator |  } 2025-05-04 00:40:38.434193 | orchestrator |  }, 2025-05-04 00:40:38.437170 | orchestrator |  "lvm_volumes": [ 2025-05-04 00:40:38.437440 | orchestrator |  { 2025-05-04 00:40:38.439910 | orchestrator |  "data": "osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b", 2025-05-04 00:40:38.440146 | orchestrator |  "data_vg": "ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b" 2025-05-04 00:40:38.440626 | orchestrator |  }, 2025-05-04 00:40:38.440876 | orchestrator |  { 2025-05-04 00:40:38.441287 | orchestrator |  "data": "osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d", 2025-05-04 00:40:38.441656 | orchestrator |  "data_vg": "ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d" 2025-05-04 00:40:38.442063 | orchestrator |  } 2025-05-04 00:40:38.442361 | orchestrator |  ] 2025-05-04 00:40:38.442696 | orchestrator |  } 2025-05-04 00:40:38.442824 | orchestrator | } 2025-05-04 00:40:38.443555 | orchestrator | 2025-05-04 00:40:38.443660 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-04 00:40:38.444223 | orchestrator | Sunday 04 May 2025 00:40:38 +0000 (0:00:00.243) 0:00:12.139 ************ 2025-05-04 00:40:40.470190 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-04 00:40:40.474897 | orchestrator | 2025-05-04 00:40:40.475165 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-05-04 00:40:40.475356 | orchestrator | 2025-05-04 00:40:40.475612 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-04 00:40:40.475809 | orchestrator | Sunday 04 May 2025 00:40:40 +0000 (0:00:02.045) 0:00:14.184 ************ 2025-05-04 00:40:40.735157 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-04 00:40:40.735543 | orchestrator | 2025-05-04 00:40:40.735923 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-04 00:40:40.736508 | orchestrator | Sunday 04 May 2025 00:40:40 +0000 (0:00:00.270) 0:00:14.455 ************ 2025-05-04 00:40:40.993027 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:40:40.993580 | orchestrator | 2025-05-04 00:40:40.995482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:40.997258 | orchestrator | Sunday 04 May 2025 00:40:40 +0000 (0:00:00.257) 0:00:14.713 ************ 2025-05-04 00:40:41.455351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-04 00:40:41.456510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-04 00:40:41.457553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-04 00:40:41.458255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-04 00:40:41.458891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-04 00:40:41.459607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-04 00:40:41.459951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-04 00:40:41.461160 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-04 00:40:41.461607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-04 00:40:41.462183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-04 00:40:41.464067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-04 00:40:41.464640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-04 00:40:41.464686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-04 00:40:41.465230 | orchestrator | 2025-05-04 00:40:41.465591 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:41.466146 | orchestrator | Sunday 04 May 2025 00:40:41 +0000 (0:00:00.459) 0:00:15.172 ************ 2025-05-04 00:40:41.666220 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:41.666876 | orchestrator | 2025-05-04 00:40:41.872441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:41.872598 | orchestrator | Sunday 04 May 2025 00:40:41 +0000 (0:00:00.212) 0:00:15.385 ************ 2025-05-04 00:40:41.872632 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:41.873731 | orchestrator | 2025-05-04 00:40:41.875469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:41.876829 | orchestrator | Sunday 04 May 2025 00:40:41 +0000 (0:00:00.204) 0:00:15.590 ************ 2025-05-04 00:40:42.073777 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:42.074909 | orchestrator | 2025-05-04 00:40:42.247664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:42.247794 | 
orchestrator | Sunday 04 May 2025 00:40:42 +0000 (0:00:00.202) 0:00:15.792 ************ 2025-05-04 00:40:42.247817 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:42.248040 | orchestrator | 2025-05-04 00:40:42.248260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:42.248917 | orchestrator | Sunday 04 May 2025 00:40:42 +0000 (0:00:00.174) 0:00:15.967 ************ 2025-05-04 00:40:42.776951 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:42.777159 | orchestrator | 2025-05-04 00:40:42.777274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:42.777601 | orchestrator | Sunday 04 May 2025 00:40:42 +0000 (0:00:00.528) 0:00:16.495 ************ 2025-05-04 00:40:42.962211 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:42.962420 | orchestrator | 2025-05-04 00:40:42.962475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:42.964651 | orchestrator | Sunday 04 May 2025 00:40:42 +0000 (0:00:00.183) 0:00:16.679 ************ 2025-05-04 00:40:43.145815 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:43.145966 | orchestrator | 2025-05-04 00:40:43.145995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:43.146703 | orchestrator | Sunday 04 May 2025 00:40:43 +0000 (0:00:00.187) 0:00:16.866 ************ 2025-05-04 00:40:43.345545 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:43.345800 | orchestrator | 2025-05-04 00:40:43.346975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:43.347560 | orchestrator | Sunday 04 May 2025 00:40:43 +0000 (0:00:00.199) 0:00:17.066 ************ 2025-05-04 00:40:43.742704 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47) 2025-05-04 00:40:44.135340 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47) 2025-05-04 00:40:44.135420 | orchestrator | 2025-05-04 00:40:44.135432 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:44.135483 | orchestrator | Sunday 04 May 2025 00:40:43 +0000 (0:00:00.395) 0:00:17.461 ************ 2025-05-04 00:40:44.135504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef) 2025-05-04 00:40:44.136619 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef) 2025-05-04 00:40:44.137471 | orchestrator | 2025-05-04 00:40:44.137861 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:44.140032 | orchestrator | Sunday 04 May 2025 00:40:44 +0000 (0:00:00.393) 0:00:17.855 ************ 2025-05-04 00:40:44.564293 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254) 2025-05-04 00:40:44.565431 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254) 2025-05-04 00:40:44.568228 | orchestrator | 2025-05-04 00:40:44.568613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:44.569940 | orchestrator | Sunday 04 May 2025 00:40:44 +0000 (0:00:00.425) 0:00:18.280 ************ 2025-05-04 00:40:44.986082 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f) 2025-05-04 00:40:44.986469 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f) 2025-05-04 00:40:44.987774 | orchestrator | 2025-05-04 00:40:44.988530 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-04 00:40:44.989980 | orchestrator | Sunday 04 May 2025 00:40:44 +0000 (0:00:00.423) 0:00:18.703 ************ 2025-05-04 00:40:45.334663 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-04 00:40:45.335072 | orchestrator | 2025-05-04 00:40:45.336832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:45.337880 | orchestrator | Sunday 04 May 2025 00:40:45 +0000 (0:00:00.348) 0:00:19.052 ************ 2025-05-04 00:40:45.973244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-04 00:40:45.973815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-04 00:40:45.973855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-04 00:40:45.973881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-04 00:40:45.973986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-04 00:40:45.974441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-04 00:40:45.975192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-04 00:40:45.975297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-04 00:40:45.975501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-04 00:40:45.978890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-04 00:40:45.979496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-05-04 00:40:45.979527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-04 00:40:45.979767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-04 00:40:45.980052 | orchestrator | 2025-05-04 00:40:45.980742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:46.182593 | orchestrator | Sunday 04 May 2025 00:40:45 +0000 (0:00:00.637) 0:00:19.690 ************ 2025-05-04 00:40:46.182785 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:46.182889 | orchestrator | 2025-05-04 00:40:46.182911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:46.182930 | orchestrator | Sunday 04 May 2025 00:40:46 +0000 (0:00:00.207) 0:00:19.898 ************ 2025-05-04 00:40:46.395811 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:46.396031 | orchestrator | 2025-05-04 00:40:46.396519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:46.397158 | orchestrator | Sunday 04 May 2025 00:40:46 +0000 (0:00:00.214) 0:00:20.113 ************ 2025-05-04 00:40:46.600759 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:46.601442 | orchestrator | 2025-05-04 00:40:46.605278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:46.610322 | orchestrator | Sunday 04 May 2025 00:40:46 +0000 (0:00:00.205) 0:00:20.318 ************ 2025-05-04 00:40:46.842787 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:46.842976 | orchestrator | 2025-05-04 00:40:46.843305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:46.843646 | orchestrator | Sunday 04 May 2025 00:40:46 +0000 (0:00:00.226) 0:00:20.545 ************ 2025-05-04 00:40:47.070260 
| orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:47.071757 | orchestrator | 2025-05-04 00:40:47.071812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:47.072521 | orchestrator | Sunday 04 May 2025 00:40:47 +0000 (0:00:00.242) 0:00:20.787 ************ 2025-05-04 00:40:47.287188 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:47.287375 | orchestrator | 2025-05-04 00:40:47.288180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:47.288904 | orchestrator | Sunday 04 May 2025 00:40:47 +0000 (0:00:00.214) 0:00:21.002 ************ 2025-05-04 00:40:47.488061 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:47.738601 | orchestrator | 2025-05-04 00:40:47.738789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:47.738832 | orchestrator | Sunday 04 May 2025 00:40:47 +0000 (0:00:00.203) 0:00:21.205 ************ 2025-05-04 00:40:47.738865 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:47.741686 | orchestrator | 2025-05-04 00:40:47.741777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:47.742418 | orchestrator | Sunday 04 May 2025 00:40:47 +0000 (0:00:00.248) 0:00:21.453 ************ 2025-05-04 00:40:48.637993 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-04 00:40:48.638659 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-04 00:40:48.641258 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-04 00:40:48.641789 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-04 00:40:48.643862 | orchestrator | 2025-05-04 00:40:48.644879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:48.645755 | orchestrator | Sunday 04 May 2025 00:40:48 +0000 (0:00:00.901) 0:00:22.355 
************ 2025-05-04 00:40:48.835758 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:48.836168 | orchestrator | 2025-05-04 00:40:48.836853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:48.837518 | orchestrator | Sunday 04 May 2025 00:40:48 +0000 (0:00:00.199) 0:00:22.554 ************ 2025-05-04 00:40:49.296780 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:49.297831 | orchestrator | 2025-05-04 00:40:49.298963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:49.300263 | orchestrator | Sunday 04 May 2025 00:40:49 +0000 (0:00:00.460) 0:00:23.015 ************ 2025-05-04 00:40:49.509495 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:49.510741 | orchestrator | 2025-05-04 00:40:49.510869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:40:49.511360 | orchestrator | Sunday 04 May 2025 00:40:49 +0000 (0:00:00.210) 0:00:23.225 ************ 2025-05-04 00:40:49.710215 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:49.711224 | orchestrator | 2025-05-04 00:40:49.714486 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-04 00:40:49.894670 | orchestrator | Sunday 04 May 2025 00:40:49 +0000 (0:00:00.201) 0:00:23.427 ************ 2025-05-04 00:40:49.894862 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-04 00:40:49.894945 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-04 00:40:49.894965 | orchestrator | 2025-05-04 00:40:49.894985 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-04 00:40:49.896506 | orchestrator | Sunday 04 May 2025 00:40:49 +0000 (0:00:00.183) 0:00:23.611 ************ 2025-05-04 00:40:50.035150 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:40:50.035361 | orchestrator | 2025-05-04 00:40:50.037382 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-04 00:40:50.038260 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.142) 0:00:23.753 ************ 2025-05-04 00:40:50.200376 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:50.201089 | orchestrator | 2025-05-04 00:40:50.201884 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-04 00:40:50.202587 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.166) 0:00:23.919 ************ 2025-05-04 00:40:50.336561 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:50.337876 | orchestrator | 2025-05-04 00:40:50.338185 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-04 00:40:50.340391 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.135) 0:00:24.055 ************ 2025-05-04 00:40:50.480597 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:40:50.480856 | orchestrator | 2025-05-04 00:40:50.481619 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-04 00:40:50.484187 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.144) 0:00:24.199 ************ 2025-05-04 00:40:50.666565 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}}) 2025-05-04 00:40:50.666846 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5e087d3a-1c7d-5e62-b576-6c121f884fde'}}) 2025-05-04 00:40:50.667629 | orchestrator | 2025-05-04 00:40:50.668128 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-04 00:40:50.670165 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.185) 0:00:24.385 ************ 2025-05-04 00:40:50.860654 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}})  2025-05-04 00:40:50.860904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5e087d3a-1c7d-5e62-b576-6c121f884fde'}})  2025-05-04 00:40:50.860930 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:50.860946 | orchestrator | 2025-05-04 00:40:50.860968 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-04 00:40:50.861879 | orchestrator | Sunday 04 May 2025 00:40:50 +0000 (0:00:00.190) 0:00:24.576 ************ 2025-05-04 00:40:51.040544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}})  2025-05-04 00:40:51.041759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5e087d3a-1c7d-5e62-b576-6c121f884fde'}})  2025-05-04 00:40:51.041803 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:51.042896 | orchestrator | 2025-05-04 00:40:51.043804 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-04 00:40:51.047083 | orchestrator | Sunday 04 May 2025 00:40:51 +0000 (0:00:00.182) 0:00:24.759 ************ 2025-05-04 00:40:51.454512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}})  2025-05-04 00:40:51.455581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5e087d3a-1c7d-5e62-b576-6c121f884fde'}})  2025-05-04 00:40:51.455983 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:51.458444 | orchestrator | 2025-05-04 00:40:51.459080 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-04 00:40:51.462212 | orchestrator | Sunday 04 May 2025 00:40:51 +0000 
(0:00:00.413) 0:00:25.172 ************ 2025-05-04 00:40:51.599185 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:40:51.600264 | orchestrator | 2025-05-04 00:40:51.600318 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-04 00:40:51.601242 | orchestrator | Sunday 04 May 2025 00:40:51 +0000 (0:00:00.145) 0:00:25.318 ************ 2025-05-04 00:40:51.753815 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:40:51.754680 | orchestrator | 2025-05-04 00:40:51.756369 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-04 00:40:51.757390 | orchestrator | Sunday 04 May 2025 00:40:51 +0000 (0:00:00.152) 0:00:25.471 ************ 2025-05-04 00:40:51.929543 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:51.933298 | orchestrator | 2025-05-04 00:40:51.934107 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-04 00:40:51.935959 | orchestrator | Sunday 04 May 2025 00:40:51 +0000 (0:00:00.177) 0:00:25.648 ************ 2025-05-04 00:40:52.093468 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:52.093821 | orchestrator | 2025-05-04 00:40:52.094120 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-04 00:40:52.094996 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.163) 0:00:25.812 ************ 2025-05-04 00:40:52.241183 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:52.241369 | orchestrator | 2025-05-04 00:40:52.408787 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-04 00:40:52.408917 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.145) 0:00:25.958 ************ 2025-05-04 00:40:52.408952 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:40:52.409055 | orchestrator |  "ceph_osd_devices": { 2025-05-04 00:40:52.410289 | orchestrator |  
"sdb": { 2025-05-04 00:40:52.410741 | orchestrator |  "osd_lvm_uuid": "03a186d7-e7a2-5e82-b5c3-d5631de29e6f" 2025-05-04 00:40:52.411748 | orchestrator |  }, 2025-05-04 00:40:52.412756 | orchestrator |  "sdc": { 2025-05-04 00:40:52.413348 | orchestrator |  "osd_lvm_uuid": "5e087d3a-1c7d-5e62-b576-6c121f884fde" 2025-05-04 00:40:52.414612 | orchestrator |  } 2025-05-04 00:40:52.415187 | orchestrator |  } 2025-05-04 00:40:52.415885 | orchestrator | } 2025-05-04 00:40:52.417528 | orchestrator | 2025-05-04 00:40:52.417821 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-04 00:40:52.417854 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.168) 0:00:26.126 ************ 2025-05-04 00:40:52.593774 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:52.594013 | orchestrator | 2025-05-04 00:40:52.595912 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-04 00:40:52.597200 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.183) 0:00:26.310 ************ 2025-05-04 00:40:52.750908 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:52.751589 | orchestrator | 2025-05-04 00:40:52.752475 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-04 00:40:52.753157 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.155) 0:00:26.466 ************ 2025-05-04 00:40:52.913368 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:40:52.913584 | orchestrator | 2025-05-04 00:40:52.913617 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-04 00:40:52.914586 | orchestrator | Sunday 04 May 2025 00:40:52 +0000 (0:00:00.164) 0:00:26.631 ************ 2025-05-04 00:40:53.431335 | orchestrator | changed: [testbed-node-4] => { 2025-05-04 00:40:53.431839 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-04 00:40:53.432644 | 
orchestrator |  "ceph_osd_devices": { 2025-05-04 00:40:53.434120 | orchestrator |  "sdb": { 2025-05-04 00:40:53.434596 | orchestrator |  "osd_lvm_uuid": "03a186d7-e7a2-5e82-b5c3-d5631de29e6f" 2025-05-04 00:40:53.435377 | orchestrator |  }, 2025-05-04 00:40:53.436433 | orchestrator |  "sdc": { 2025-05-04 00:40:53.437173 | orchestrator |  "osd_lvm_uuid": "5e087d3a-1c7d-5e62-b576-6c121f884fde" 2025-05-04 00:40:53.438104 | orchestrator |  } 2025-05-04 00:40:53.438747 | orchestrator |  }, 2025-05-04 00:40:53.439845 | orchestrator |  "lvm_volumes": [ 2025-05-04 00:40:53.441259 | orchestrator |  { 2025-05-04 00:40:53.441640 | orchestrator |  "data": "osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f", 2025-05-04 00:40:53.442644 | orchestrator |  "data_vg": "ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f" 2025-05-04 00:40:53.443993 | orchestrator |  }, 2025-05-04 00:40:53.444411 | orchestrator |  { 2025-05-04 00:40:53.445317 | orchestrator |  "data": "osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde", 2025-05-04 00:40:53.445898 | orchestrator |  "data_vg": "ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde" 2025-05-04 00:40:53.446611 | orchestrator |  } 2025-05-04 00:40:53.447287 | orchestrator |  ] 2025-05-04 00:40:53.448063 | orchestrator |  } 2025-05-04 00:40:53.448350 | orchestrator | } 2025-05-04 00:40:53.448835 | orchestrator | 2025-05-04 00:40:53.449400 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-04 00:40:53.449633 | orchestrator | Sunday 04 May 2025 00:40:53 +0000 (0:00:00.515) 0:00:27.146 ************ 2025-05-04 00:40:54.853070 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-04 00:40:54.854424 | orchestrator | 2025-05-04 00:40:54.856256 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-04 00:40:54.856300 | orchestrator | 2025-05-04 00:40:54.857151 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2025-05-04 00:40:54.857189 | orchestrator | Sunday 04 May 2025 00:40:54 +0000 (0:00:01.423) 0:00:28.570 ************ 2025-05-04 00:40:55.104978 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-04 00:40:55.105939 | orchestrator | 2025-05-04 00:40:55.106336 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-04 00:40:55.107336 | orchestrator | Sunday 04 May 2025 00:40:55 +0000 (0:00:00.253) 0:00:28.823 ************ 2025-05-04 00:40:55.361183 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:40:55.361790 | orchestrator | 2025-05-04 00:40:55.362813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:55.364088 | orchestrator | Sunday 04 May 2025 00:40:55 +0000 (0:00:00.254) 0:00:29.078 ************ 2025-05-04 00:40:56.152393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-04 00:40:56.154204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-04 00:40:56.154293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-04 00:40:56.155826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-04 00:40:56.156672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-04 00:40:56.157500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-04 00:40:56.158248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-04 00:40:56.163784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-04 00:40:56.163878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=sda) 2025-05-04 00:40:56.163896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-04 00:40:56.163923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-04 00:40:56.163948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-04 00:40:56.163976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-04 00:40:56.164001 | orchestrator | 2025-05-04 00:40:56.164021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:56.164146 | orchestrator | Sunday 04 May 2025 00:40:56 +0000 (0:00:00.792) 0:00:29.871 ************ 2025-05-04 00:40:56.359007 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:56.361458 | orchestrator | 2025-05-04 00:40:56.363397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:56.364073 | orchestrator | Sunday 04 May 2025 00:40:56 +0000 (0:00:00.206) 0:00:30.078 ************ 2025-05-04 00:40:56.567043 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:56.568140 | orchestrator | 2025-05-04 00:40:56.569632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:56.570702 | orchestrator | Sunday 04 May 2025 00:40:56 +0000 (0:00:00.208) 0:00:30.286 ************ 2025-05-04 00:40:56.788829 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:56.789238 | orchestrator | 2025-05-04 00:40:56.789822 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:56.790198 | orchestrator | Sunday 04 May 2025 00:40:56 +0000 (0:00:00.221) 0:00:30.508 ************ 2025-05-04 00:40:57.010850 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:57.012136 | orchestrator | 
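The "Generate lvm_volumes structure (block only)" task seen above for testbed-node-4 maps each entry of `ceph_osd_devices` with an `osd_lvm_uuid` onto an `osd-block-<uuid>` logical volume in a `ceph-<uuid>` volume group. A minimal sketch of that mapping, assuming block-only OSDs (the function name is hypothetical; the UUIDs are copied from the log):

```python
# Build the lvm_volumes list from ceph_osd_devices, mirroring the
# "Generate lvm_volumes structure (block only)" / "Compile lvm_volumes"
# output printed for testbed-node-4 above (no separate DB/WAL devices).

def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    volumes = []
    for _dev, cfg in sorted(ceph_osd_devices.items()):
        uuid = cfg["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# UUIDs as reported for testbed-node-4 in the configuration dump above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "03a186d7-e7a2-5e82-b5c3-d5631de29e6f"},
    "sdc": {"osd_lvm_uuid": "5e087d3a-1c7d-5e62-b576-6c121f884fde"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

The block+db, block+wal, and block+db+wal variants skip in this run because no dedicated DB or WAL devices are configured, so only this block-only shape reaches the compiled `lvm_volumes`.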
2025-05-04 00:40:57.012702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:57.013423 | orchestrator | Sunday 04 May 2025 00:40:57 +0000 (0:00:00.220) 0:00:30.728 ************ 2025-05-04 00:40:57.228180 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:57.230257 | orchestrator | 2025-05-04 00:40:57.230701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:57.231245 | orchestrator | Sunday 04 May 2025 00:40:57 +0000 (0:00:00.216) 0:00:30.945 ************ 2025-05-04 00:40:57.436775 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:57.437003 | orchestrator | 2025-05-04 00:40:57.438507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:57.439177 | orchestrator | Sunday 04 May 2025 00:40:57 +0000 (0:00:00.210) 0:00:31.155 ************ 2025-05-04 00:40:57.649868 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:57.650228 | orchestrator | 2025-05-04 00:40:57.651233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:57.652093 | orchestrator | Sunday 04 May 2025 00:40:57 +0000 (0:00:00.212) 0:00:31.368 ************ 2025-05-04 00:40:57.858750 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:40:57.859420 | orchestrator | 2025-05-04 00:40:57.860097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:57.860808 | orchestrator | Sunday 04 May 2025 00:40:57 +0000 (0:00:00.209) 0:00:31.577 ************ 2025-05-04 00:40:58.549491 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9) 2025-05-04 00:40:58.551289 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9) 2025-05-04 00:40:58.551346 | orchestrator | 2025-05-04 
00:40:58.551602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:58.551637 | orchestrator | Sunday 04 May 2025 00:40:58 +0000 (0:00:00.689) 0:00:32.266 ************ 2025-05-04 00:40:59.179006 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d) 2025-05-04 00:40:59.179647 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d) 2025-05-04 00:40:59.180629 | orchestrator | 2025-05-04 00:40:59.181701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:59.184471 | orchestrator | Sunday 04 May 2025 00:40:59 +0000 (0:00:00.629) 0:00:32.896 ************ 2025-05-04 00:40:59.618413 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783) 2025-05-04 00:40:59.619838 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783) 2025-05-04 00:40:59.619893 | orchestrator | 2025-05-04 00:40:59.620795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:40:59.621385 | orchestrator | Sunday 04 May 2025 00:40:59 +0000 (0:00:00.439) 0:00:33.335 ************ 2025-05-04 00:41:00.053904 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123) 2025-05-04 00:41:00.054754 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123) 2025-05-04 00:41:00.057857 | orchestrator | 2025-05-04 00:41:00.414427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:41:00.414618 | orchestrator | Sunday 04 May 2025 00:41:00 +0000 (0:00:00.435) 0:00:33.771 ************ 2025-05-04 00:41:00.414657 | orchestrator | ok: [testbed-node-5] => 
(item=ata-QEMU_DVD-ROM_QM00001) 2025-05-04 00:41:00.414804 | orchestrator | 2025-05-04 00:41:00.416024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:00.418465 | orchestrator | Sunday 04 May 2025 00:41:00 +0000 (0:00:00.360) 0:00:34.132 ************ 2025-05-04 00:41:00.833473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-04 00:41:00.834454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-04 00:41:00.834962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-04 00:41:00.837009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-04 00:41:00.838160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-04 00:41:00.839577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-04 00:41:00.840819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-04 00:41:00.841840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-04 00:41:00.842682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-04 00:41:00.843339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-04 00:41:00.844246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-04 00:41:00.845020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-04 00:41:00.846277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => 
(item=sr0) 2025-05-04 00:41:00.846987 | orchestrator | 2025-05-04 00:41:00.847902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:00.848598 | orchestrator | Sunday 04 May 2025 00:41:00 +0000 (0:00:00.420) 0:00:34.552 ************ 2025-05-04 00:41:01.042236 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:01.043173 | orchestrator | 2025-05-04 00:41:01.044220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:01.045351 | orchestrator | Sunday 04 May 2025 00:41:01 +0000 (0:00:00.206) 0:00:34.758 ************ 2025-05-04 00:41:01.258248 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:01.258654 | orchestrator | 2025-05-04 00:41:01.259536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:01.260400 | orchestrator | Sunday 04 May 2025 00:41:01 +0000 (0:00:00.217) 0:00:34.976 ************ 2025-05-04 00:41:01.464970 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:01.465514 | orchestrator | 2025-05-04 00:41:01.468699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:01.688249 | orchestrator | Sunday 04 May 2025 00:41:01 +0000 (0:00:00.205) 0:00:35.182 ************ 2025-05-04 00:41:01.688439 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:01.688521 | orchestrator | 2025-05-04 00:41:01.689755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:01.690176 | orchestrator | Sunday 04 May 2025 00:41:01 +0000 (0:00:00.224) 0:00:35.407 ************ 2025-05-04 00:41:02.380821 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:02.381844 | orchestrator | 2025-05-04 00:41:02.382845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:02.384007 | 
orchestrator | Sunday 04 May 2025 00:41:02 +0000 (0:00:00.690) 0:00:36.097 ************ 2025-05-04 00:41:02.589585 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:02.590635 | orchestrator | 2025-05-04 00:41:02.591181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:02.591766 | orchestrator | Sunday 04 May 2025 00:41:02 +0000 (0:00:00.209) 0:00:36.307 ************ 2025-05-04 00:41:02.801703 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:02.802286 | orchestrator | 2025-05-04 00:41:02.802335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:02.802864 | orchestrator | Sunday 04 May 2025 00:41:02 +0000 (0:00:00.212) 0:00:36.520 ************ 2025-05-04 00:41:03.049318 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:03.049498 | orchestrator | 2025-05-04 00:41:03.050453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:03.050810 | orchestrator | Sunday 04 May 2025 00:41:03 +0000 (0:00:00.245) 0:00:36.765 ************ 2025-05-04 00:41:03.686996 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-04 00:41:03.687715 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-04 00:41:03.687884 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-04 00:41:03.688160 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-04 00:41:03.690360 | orchestrator | 2025-05-04 00:41:03.691069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:41:03.893785 | orchestrator | Sunday 04 May 2025 00:41:03 +0000 (0:00:00.637) 0:00:37.403 ************ 2025-05-04 00:41:03.893958 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:41:03.894127 | orchestrator | 2025-05-04 00:41:04.126797 | orchestrator | TASK [Add known partitions to the list of available block devices] 
*************
2025-05-04 00:41:04.126957 | orchestrator | Sunday 04 May 2025 00:41:03 +0000 (0:00:00.209) 0:00:37.613 ************
2025-05-04 00:41:04.127007 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:04.127211 | orchestrator |
2025-05-04 00:41:04.128554 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:04.128853 | orchestrator | Sunday 04 May 2025 00:41:04 +0000 (0:00:00.230) 0:00:37.844 ************
2025-05-04 00:41:04.333660 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:04.335290 | orchestrator |
2025-05-04 00:41:04.336306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:04.338796 | orchestrator | Sunday 04 May 2025 00:41:04 +0000 (0:00:00.208) 0:00:38.052 ************
2025-05-04 00:41:04.542590 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:04.543020 | orchestrator |
2025-05-04 00:41:04.543067 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-04 00:41:04.546215 | orchestrator | Sunday 04 May 2025 00:41:04 +0000 (0:00:00.208) 0:00:38.260 ************
2025-05-04 00:41:04.736035 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-04 00:41:04.736498 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-04 00:41:04.737700 | orchestrator |
2025-05-04 00:41:04.738777 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-04 00:41:04.739899 | orchestrator | Sunday 04 May 2025 00:41:04 +0000 (0:00:00.194) 0:00:38.455 ************
2025-05-04 00:41:04.870194 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:04.870888 | orchestrator |
2025-05-04 00:41:04.872080 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-04 00:41:04.873539 | orchestrator | Sunday 04 May 2025 00:41:04 +0000 (0:00:00.134) 0:00:38.589 ************
2025-05-04 00:41:05.194375 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:05.195386 | orchestrator |
2025-05-04 00:41:05.196167 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-04 00:41:05.196820 | orchestrator | Sunday 04 May 2025 00:41:05 +0000 (0:00:00.320) 0:00:38.909 ************
2025-05-04 00:41:05.315115 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:05.316569 | orchestrator |
2025-05-04 00:41:05.317825 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-04 00:41:05.319241 | orchestrator | Sunday 04 May 2025 00:41:05 +0000 (0:00:00.124) 0:00:39.033 ************
2025-05-04 00:41:05.467235 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:41:05.468862 | orchestrator |
2025-05-04 00:41:05.470130 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-04 00:41:05.470706 | orchestrator | Sunday 04 May 2025 00:41:05 +0000 (0:00:00.152) 0:00:39.186 ************
2025-05-04 00:41:05.655632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98453abf-c748-514f-aec7-544322a7c940'}})
2025-05-04 00:41:05.656427 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f54bf35c-9381-504c-8591-afe4d3e61469'}})
2025-05-04 00:41:05.658374 | orchestrator |
2025-05-04 00:41:05.659807 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-04 00:41:05.660713 | orchestrator | Sunday 04 May 2025 00:41:05 +0000 (0:00:00.187) 0:00:39.373 ************
2025-05-04 00:41:05.830386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98453abf-c748-514f-aec7-544322a7c940'}})
2025-05-04 00:41:05.830615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f54bf35c-9381-504c-8591-afe4d3e61469'}})
2025-05-04 00:41:05.831878 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:05.832749 | orchestrator |
2025-05-04 00:41:05.833491 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-04 00:41:05.834468 | orchestrator | Sunday 04 May 2025 00:41:05 +0000 (0:00:00.175) 0:00:39.548 ************
2025-05-04 00:41:06.012985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98453abf-c748-514f-aec7-544322a7c940'}})
2025-05-04 00:41:06.014666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f54bf35c-9381-504c-8591-afe4d3e61469'}})
2025-05-04 00:41:06.015987 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:06.017172 | orchestrator |
2025-05-04 00:41:06.018378 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-04 00:41:06.019465 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.181) 0:00:39.730 ************
2025-05-04 00:41:06.180962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98453abf-c748-514f-aec7-544322a7c940'}})
2025-05-04 00:41:06.181643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f54bf35c-9381-504c-8591-afe4d3e61469'}})
2025-05-04 00:41:06.182536 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:06.183661 | orchestrator |
2025-05-04 00:41:06.185448 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-04 00:41:06.348136 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.169) 0:00:39.899 ************
2025-05-04 00:41:06.348299 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:41:06.348412 | orchestrator |
2025-05-04 00:41:06.348944 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-04 00:41:06.348985 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.167) 0:00:40.067 ************
2025-05-04 00:41:06.498832 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:41:06.499019 | orchestrator |
2025-05-04 00:41:06.499048 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-04 00:41:06.500504 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.149) 0:00:40.216 ************
2025-05-04 00:41:06.630859 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:06.632207 | orchestrator |
2025-05-04 00:41:06.632253 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-04 00:41:06.632924 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.132) 0:00:40.349 ************
2025-05-04 00:41:06.769547 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:06.770884 | orchestrator |
2025-05-04 00:41:06.772305 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-04 00:41:06.773545 | orchestrator | Sunday 04 May 2025 00:41:06 +0000 (0:00:00.138) 0:00:40.488 ************
2025-05-04 00:41:07.167357 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:07.168126 | orchestrator |
2025-05-04 00:41:07.169323 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-04 00:41:07.170488 | orchestrator | Sunday 04 May 2025 00:41:07 +0000 (0:00:00.394) 0:00:40.882 ************
2025-05-04 00:41:07.320981 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:41:07.321151 | orchestrator |  "ceph_osd_devices": {
2025-05-04 00:41:07.321679 | orchestrator |  "sdb": {
2025-05-04 00:41:07.322335 | orchestrator |  "osd_lvm_uuid": "98453abf-c748-514f-aec7-544322a7c940"
2025-05-04 00:41:07.323642 | orchestrator |  },
2025-05-04 00:41:07.324694 | orchestrator |  "sdc": {
2025-05-04 00:41:07.325088 | orchestrator |  "osd_lvm_uuid": "f54bf35c-9381-504c-8591-afe4d3e61469"
2025-05-04 00:41:07.325500 | orchestrator |  }
2025-05-04 00:41:07.327609 | orchestrator |  }
2025-05-04 00:41:07.327874 | orchestrator | }
2025-05-04 00:41:07.328884 | orchestrator |
2025-05-04 00:41:07.329503 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-04 00:41:07.329689 | orchestrator | Sunday 04 May 2025 00:41:07 +0000 (0:00:00.155) 0:00:41.038 ************
2025-05-04 00:41:07.460338 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:07.460934 | orchestrator |
2025-05-04 00:41:07.461363 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-04 00:41:07.461897 | orchestrator | Sunday 04 May 2025 00:41:07 +0000 (0:00:00.140) 0:00:41.178 ************
2025-05-04 00:41:07.604654 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:07.604892 | orchestrator |
2025-05-04 00:41:07.605359 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-04 00:41:07.605897 | orchestrator | Sunday 04 May 2025 00:41:07 +0000 (0:00:00.144) 0:00:41.323 ************
2025-05-04 00:41:07.742653 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:41:07.743972 | orchestrator |
2025-05-04 00:41:07.744494 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-04 00:41:07.745272 | orchestrator | Sunday 04 May 2025 00:41:07 +0000 (0:00:00.136) 0:00:41.460 ************
2025-05-04 00:41:08.061058 | orchestrator | changed: [testbed-node-5] => {
2025-05-04 00:41:08.061300 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-04 00:41:08.061691 | orchestrator |  "ceph_osd_devices": {
2025-05-04 00:41:08.062325 | orchestrator |  "sdb": {
2025-05-04 00:41:08.062919 | orchestrator |  "osd_lvm_uuid": "98453abf-c748-514f-aec7-544322a7c940"
2025-05-04 00:41:08.063350 | orchestrator |  },
2025-05-04 00:41:08.064061 | orchestrator |  "sdc": {
2025-05-04 00:41:08.065214 | orchestrator |  "osd_lvm_uuid": "f54bf35c-9381-504c-8591-afe4d3e61469"
2025-05-04 00:41:08.065325 | orchestrator |  }
2025-05-04 00:41:08.065599 | orchestrator |  },
2025-05-04 00:41:08.066421 | orchestrator |  "lvm_volumes": [
2025-05-04 00:41:08.066904 | orchestrator |  {
2025-05-04 00:41:08.067652 | orchestrator |  "data": "osd-block-98453abf-c748-514f-aec7-544322a7c940",
2025-05-04 00:41:08.068351 | orchestrator |  "data_vg": "ceph-98453abf-c748-514f-aec7-544322a7c940"
2025-05-04 00:41:08.068579 | orchestrator |  },
2025-05-04 00:41:08.068672 | orchestrator |  {
2025-05-04 00:41:08.069426 | orchestrator |  "data": "osd-block-f54bf35c-9381-504c-8591-afe4d3e61469",
2025-05-04 00:41:08.069714 | orchestrator |  "data_vg": "ceph-f54bf35c-9381-504c-8591-afe4d3e61469"
2025-05-04 00:41:08.070080 | orchestrator |  }
2025-05-04 00:41:08.071016 | orchestrator |  ]
2025-05-04 00:41:08.071313 | orchestrator |  }
2025-05-04 00:41:08.071881 | orchestrator | }
2025-05-04 00:41:08.072100 | orchestrator |
2025-05-04 00:41:08.072866 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-04 00:41:09.229873 | orchestrator | Sunday 04 May 2025 00:41:08 +0000 (0:00:00.319) 0:00:41.779 ************
2025-05-04 00:41:09.230097 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-04 00:41:09.231471 | orchestrator |
2025-05-04 00:41:09.233833 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:41:09.233889 | orchestrator | 2025-05-04 00:41:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:41:09.235277 | orchestrator | 2025-05-04 00:41:09 | INFO  | Please wait and do not abort execution.
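The "Print configuration data" output above shows how each `ceph_osd_devices` entry is expanded into a "block only" `lvm_volumes` entry: the OSD's `osd_lvm_uuid` is reused both for the LV name (`osd-block-<uuid>`) and the VG name (`ceph-<uuid>`). A minimal sketch of that mapping, using the UUIDs from the log; this is an illustration of the naming scheme, not the OSISM playbook code itself:

```python
# Device-to-UUID mapping as printed by the "Print ceph_osd_devices" task.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "98453abf-c748-514f-aec7-544322a7c940"},
    "sdc": {"osd_lvm_uuid": "f54bf35c-9381-504c-8591-afe4d3e61469"},
}

# Expand each OSD device into a "block only" lvm_volumes entry,
# matching the structure shown in _ceph_configure_lvm_config_data.
lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",
    }
    for v in ceph_osd_devices.values()
]
```

Because the UUID appears in both names, an LV/VG pair can always be traced back to its OSD entry without extra bookkeeping.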
2025-05-04 00:41:09.235316 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-04 00:41:09.236375 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-04 00:41:09.237469 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-04 00:41:09.238456 | orchestrator |
2025-05-04 00:41:09.239798 | orchestrator |
2025-05-04 00:41:09.241367 | orchestrator |
2025-05-04 00:41:09.242266 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 00:41:09.243414 | orchestrator | Sunday 04 May 2025 00:41:09 +0000 (0:00:01.167) 0:00:42.946 ************
2025-05-04 00:41:09.244355 | orchestrator | ===============================================================================
2025-05-04 00:41:09.245352 | orchestrator | Write configuration file ------------------------------------------------ 4.64s
2025-05-04 00:41:09.246897 | orchestrator | Add known links to the list of available block devices ------------------ 1.68s
2025-05-04 00:41:09.247788 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s
2025-05-04 00:41:09.248309 | orchestrator | Print configuration data ------------------------------------------------ 1.08s
2025-05-04 00:41:09.249389 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2025-05-04 00:41:09.249879 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s
2025-05-04 00:41:09.250325 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.76s
2025-05-04 00:41:09.251014 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.74s
2025-05-04 00:41:09.251643 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-04 00:41:09.252301 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-05-04 00:41:09.252980 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-05-04 00:41:09.253505 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-05-04 00:41:09.254264 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.66s
2025-05-04 00:41:09.254799 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-04 00:41:09.255253 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-04 00:41:09.255903 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-05-04 00:41:09.256392 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-04 00:41:09.256951 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.61s
2025-05-04 00:41:09.258013 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.54s
2025-05-04 00:41:09.258638 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.53s
2025-05-04 00:41:21.623630 | orchestrator | 2025-05-04 00:41:21 | INFO  | Task 227612a7-a97b-45c1-9014-a46beff83def is running in background. Output coming soon.
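The `osd_lvm_uuid` values seen throughout this log (e.g. `98453abf-c748-514f-...`, `c91b3cb6-7edb-5452-...`) carry the version nibble 5, which suggests deterministic, name-based UUIDs (RFC 4122, `uuid5`) rather than random ones; that would explain why re-running the play yields stable VG/LV names per node and device. A sketch of such a scheme, where the namespace and the exact name format are assumptions for illustration, not the values OSISM uses:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based UUID for one OSD device on one host.

    The namespace (NAMESPACE_DNS) and the "<hostname>-<device>" name format
    are hypothetical; only the uuid5 mechanism itself is illustrated here.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

a = osd_lvm_uuid("testbed-node-5", "sdb")
b = osd_lvm_uuid("testbed-node-5", "sdb")
assert a == b          # same inputs -> same UUID on every run
assert a[14] == "5"    # version nibble marks a name-based (v5) UUID
```

The practical benefit is idempotency: tasks like "Create block VGs" can be re-run safely, because the generated names never change for a given host/device pair.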
2025-05-04 00:41:47.148868 | orchestrator | 2025-05-04 00:41:38 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-04 00:41:48.843491 | orchestrator | 2025-05-04 00:41:38 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-04 00:41:48.843644 | orchestrator | 2025-05-04 00:41:38 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-04 00:41:48.843664 | orchestrator | 2025-05-04 00:41:38 | INFO  | Handling group overwrites in 99-overwrite
2025-05-04 00:41:48.843696 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group frr:children from 60-generic
2025-05-04 00:41:48.843711 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group storage:children from 50-kolla
2025-05-04 00:41:48.843806 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-04 00:41:48.843827 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-04 00:41:48.843843 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-04 00:41:48.843858 | orchestrator | 2025-05-04 00:41:38 | INFO  | Handling group overwrites in 20-roles
2025-05-04 00:41:48.843873 | orchestrator | 2025-05-04 00:41:38 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-04 00:41:48.843888 | orchestrator | 2025-05-04 00:41:39 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-04 00:41:48.843902 | orchestrator | 2025-05-04 00:41:46 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-04 00:41:48.843935 | orchestrator | 2025-05-04 00:41:48 | INFO  | Task 6f07d683-58eb-4505-b026-f346c90ca164 (ceph-create-lvm-devices) was prepared for execution.
2025-05-04 00:41:51.886992 | orchestrator | 2025-05-04 00:41:48 | INFO  | It takes a moment until task 6f07d683-58eb-4505-b026-f346c90ca164 (ceph-create-lvm-devices) has been started and output is visible here.
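The `ceph-create-lvm-devices` play that follows creates one volume group and one logical volume per OSD disk from the `lvm_volumes` entries ("Create block VGs" and "Create block LVs" tasks). A rough sketch of the equivalent LVM command lines, built as strings only; the device path association and the `-l 100%FREE` sizing are assumptions for illustration, not taken from the playbook:

```python
# One lvm_volumes entry from the log; the "device" key is hypothetical here,
# added only to show which disk the VG would be created on.
lvm_volumes = [
    {"data": "osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b",
     "data_vg": "ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b",
     "device": "/dev/sdb"},
]

def lvm_commands(vol: dict) -> list[str]:
    """Build the vgcreate/lvcreate command lines for one OSD volume."""
    return [
        # vgcreate initializes the device as a PV and creates the VG on it.
        f"vgcreate {vol['data_vg']} {vol['device']}",
        # One LV spanning the whole VG, named after the OSD block UUID.
        f"lvcreate -l 100%FREE -n {vol['data']} {vol['data_vg']}",
    ]

for vol in lvm_volumes:
    for cmd in lvm_commands(vol):
        print(cmd)
```

Ansible's `lvg`/`lvol` modules (or `ceph-volume` itself) wrap these commands, which is why the tasks report `changed` on first run and would be idempotent afterwards.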
2025-05-04 00:41:51.887173 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-04 00:41:52.453314 | orchestrator |
2025-05-04 00:41:52.460862 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-04 00:41:52.460912 | orchestrator |
2025-05-04 00:41:52.460939 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-04 00:41:52.690860 | orchestrator | Sunday 04 May 2025 00:41:52 +0000 (0:00:00.496) 0:00:00.496 ************
2025-05-04 00:41:52.691020 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-04 00:41:52.691831 | orchestrator |
2025-05-04 00:41:52.692912 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-04 00:41:52.698806 | orchestrator | Sunday 04 May 2025 00:41:52 +0000 (0:00:00.240) 0:00:00.736 ************
2025-05-04 00:41:52.917620 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:41:52.921992 | orchestrator |
2025-05-04 00:41:52.922328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:53.635298 | orchestrator | Sunday 04 May 2025 00:41:52 +0000 (0:00:00.226) 0:00:00.963 ************
2025-05-04 00:41:53.635473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-04 00:41:53.637452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-04 00:41:53.637817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-04 00:41:53.638628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-04 00:41:53.640127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-04 00:41:53.640933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-04 00:41:53.642470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-04 00:41:53.643792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-04 00:41:53.645242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-04 00:41:53.646509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-04 00:41:53.647781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-04 00:41:53.648479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-04 00:41:53.649466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-04 00:41:53.650559 | orchestrator |
2025-05-04 00:41:53.651497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:53.652126 | orchestrator | Sunday 04 May 2025 00:41:53 +0000 (0:00:00.716) 0:00:01.679 ************
2025-05-04 00:41:53.850211 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:53.850588 | orchestrator |
2025-05-04 00:41:53.851976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:53.854871 | orchestrator | Sunday 04 May 2025 00:41:53 +0000 (0:00:00.215) 0:00:01.895 ************
2025-05-04 00:41:54.063406 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:54.066315 | orchestrator |
2025-05-04 00:41:54.066628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:54.066677 | orchestrator | Sunday 04 May 2025 00:41:54 +0000 (0:00:00.210) 0:00:02.106 ************
2025-05-04 00:41:54.263032 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:54.263634 | orchestrator |
2025-05-04 00:41:54.264080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:54.265245 | orchestrator | Sunday 04 May 2025 00:41:54 +0000 (0:00:00.201) 0:00:02.308 ************
2025-05-04 00:41:54.470730 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:54.472010 | orchestrator |
2025-05-04 00:41:54.472832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:54.479041 | orchestrator | Sunday 04 May 2025 00:41:54 +0000 (0:00:00.208) 0:00:02.516 ************
2025-05-04 00:41:54.677296 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:54.678009 | orchestrator |
2025-05-04 00:41:54.681164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:54.893633 | orchestrator | Sunday 04 May 2025 00:41:54 +0000 (0:00:00.205) 0:00:02.721 ************
2025-05-04 00:41:54.893821 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:54.894862 | orchestrator |
2025-05-04 00:41:54.898821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:54.899188 | orchestrator | Sunday 04 May 2025 00:41:54 +0000 (0:00:00.217) 0:00:02.939 ************
2025-05-04 00:41:55.093066 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:55.095877 | orchestrator |
2025-05-04 00:41:55.096004 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:55.298003 | orchestrator | Sunday 04 May 2025 00:41:55 +0000 (0:00:00.197) 0:00:03.136 ************
2025-05-04 00:41:55.298269 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:55.298823 | orchestrator |
2025-05-04 00:41:55.305403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:55.899370 | orchestrator | Sunday 04 May 2025 00:41:55 +0000 (0:00:00.206) 0:00:03.343 ************
2025-05-04 00:41:55.899584 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6)
2025-05-04 00:41:55.900713 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6)
2025-05-04 00:41:55.906956 | orchestrator |
2025-05-04 00:41:56.727343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:56.727476 | orchestrator | Sunday 04 May 2025 00:41:55 +0000 (0:00:00.600) 0:00:03.944 ************
2025-05-04 00:41:56.727513 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7)
2025-05-04 00:41:56.728325 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7)
2025-05-04 00:41:56.732168 | orchestrator |
2025-05-04 00:41:56.732827 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:56.734721 | orchestrator | Sunday 04 May 2025 00:41:56 +0000 (0:00:00.828) 0:00:04.772 ************
2025-05-04 00:41:57.185939 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93)
2025-05-04 00:41:57.186677 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93)
2025-05-04 00:41:57.187797 | orchestrator |
2025-05-04 00:41:57.190546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:57.190787 | orchestrator | Sunday 04 May 2025 00:41:57 +0000 (0:00:00.458) 0:00:05.230 ************
2025-05-04 00:41:57.622605 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc)
2025-05-04 00:41:57.623019 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc)
2025-05-04 00:41:57.623952 | orchestrator |
2025-05-04 00:41:57.627980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-04 00:41:57.628449 | orchestrator | Sunday 04 May 2025 00:41:57 +0000 (0:00:00.436) 0:00:05.667 ************
2025-05-04 00:41:57.960600 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-04 00:41:57.961187 | orchestrator |
2025-05-04 00:41:57.961496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:57.965177 | orchestrator | Sunday 04 May 2025 00:41:57 +0000 (0:00:00.337) 0:00:06.004 ************
2025-05-04 00:41:58.456377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-04 00:41:58.457909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-04 00:41:58.459273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-04 00:41:58.461898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-04 00:41:58.462314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-04 00:41:58.462351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-04 00:41:58.463562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-04 00:41:58.464211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-04 00:41:58.465496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-04 00:41:58.466449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-04 00:41:58.467616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-04 00:41:58.468331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-04 00:41:58.469304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-04 00:41:58.470927 | orchestrator |
2025-05-04 00:41:58.471953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:58.472693 | orchestrator | Sunday 04 May 2025 00:41:58 +0000 (0:00:00.498) 0:00:06.502 ************
2025-05-04 00:41:58.683841 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:58.885227 | orchestrator |
2025-05-04 00:41:58.885360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:58.885379 | orchestrator | Sunday 04 May 2025 00:41:58 +0000 (0:00:00.221) 0:00:06.724 ************
2025-05-04 00:41:58.885416 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:58.885534 | orchestrator |
2025-05-04 00:41:58.886405 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:58.887413 | orchestrator | Sunday 04 May 2025 00:41:58 +0000 (0:00:00.206) 0:00:06.930 ************
2025-05-04 00:41:59.104107 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:59.104740 | orchestrator |
2025-05-04 00:41:59.105808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:59.107040 | orchestrator | Sunday 04 May 2025 00:41:59 +0000 (0:00:00.220) 0:00:07.151 ************
2025-05-04 00:41:59.301458 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:59.302204 | orchestrator |
2025-05-04 00:41:59.302249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:59.302801 | orchestrator | Sunday 04 May 2025 00:41:59 +0000 (0:00:00.195) 0:00:07.347 ************
2025-05-04 00:41:59.952268 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:41:59.952624 | orchestrator |
2025-05-04 00:41:59.953819 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:41:59.954810 | orchestrator | Sunday 04 May 2025 00:41:59 +0000 (0:00:00.649) 0:00:07.996 ************
2025-05-04 00:42:00.157723 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:00.158003 | orchestrator |
2025-05-04 00:42:00.158845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:00.161569 | orchestrator | Sunday 04 May 2025 00:42:00 +0000 (0:00:00.206) 0:00:08.202 ************
2025-05-04 00:42:00.360250 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:00.360714 | orchestrator |
2025-05-04 00:42:00.362347 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:00.364387 | orchestrator | Sunday 04 May 2025 00:42:00 +0000 (0:00:00.203) 0:00:08.406 ************
2025-05-04 00:42:00.560289 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:00.561067 | orchestrator |
2025-05-04 00:42:00.562331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:00.565039 | orchestrator | Sunday 04 May 2025 00:42:00 +0000 (0:00:00.200) 0:00:08.606 ************
2025-05-04 00:42:01.242355 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-04 00:42:01.242560 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-04 00:42:01.243535 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-04 00:42:01.244508 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-04 00:42:01.245343 | orchestrator |
2025-05-04 00:42:01.245936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:01.246581 | orchestrator | Sunday 04 May 2025 00:42:01 +0000 (0:00:00.677) 0:00:09.284 ************
2025-05-04 00:42:01.446832 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:01.447870 | orchestrator |
2025-05-04 00:42:01.448688 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:01.449907 | orchestrator | Sunday 04 May 2025 00:42:01 +0000 (0:00:00.206) 0:00:09.490 ************
2025-05-04 00:42:01.635303 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:01.636340 | orchestrator |
2025-05-04 00:42:01.638627 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:01.639846 | orchestrator | Sunday 04 May 2025 00:42:01 +0000 (0:00:00.190) 0:00:09.681 ************
2025-05-04 00:42:01.848338 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:01.849538 | orchestrator |
2025-05-04 00:42:01.849596 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-04 00:42:01.850279 | orchestrator | Sunday 04 May 2025 00:42:01 +0000 (0:00:00.213) 0:00:09.894 ************
2025-05-04 00:42:02.047022 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:02.047255 | orchestrator |
2025-05-04 00:42:02.047890 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-04 00:42:02.049245 | orchestrator | Sunday 04 May 2025 00:42:02 +0000 (0:00:00.199) 0:00:10.093 ************
2025-05-04 00:42:02.186837 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:02.187043 | orchestrator |
2025-05-04 00:42:02.187698 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-04 00:42:02.188984 | orchestrator | Sunday 04 May 2025 00:42:02 +0000 (0:00:00.140) 0:00:10.233 ************
2025-05-04 00:42:02.433014 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c91b3cb6-7edb-5452-ada6-d38ce882942b'}})
2025-05-04 00:42:02.433190 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}})
2025-05-04 00:42:02.433975 | orchestrator |
2025-05-04 00:42:02.434464 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-04 00:42:02.434999 | orchestrator | Sunday 04 May 2025 00:42:02 +0000 (0:00:00.244) 0:00:10.478 ************
2025-05-04 00:42:04.751521 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:04.751741 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:04.752735 | orchestrator |
2025-05-04 00:42:04.753504 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-04 00:42:04.755055 | orchestrator | Sunday 04 May 2025 00:42:04 +0000 (0:00:02.317) 0:00:12.795 ************
2025-05-04 00:42:04.933677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:04.933920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:04.934715 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:04.935232 | orchestrator |
2025-05-04 00:42:04.935835 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-04 00:42:04.936404 | orchestrator | Sunday 04 May 2025 00:42:04 +0000 (0:00:00.184) 0:00:12.979 ************
2025-05-04 00:42:06.447646 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:06.447888 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:06.449499 | orchestrator |
2025-05-04 00:42:06.450104 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-04 00:42:06.451329 | orchestrator | Sunday 04 May 2025 00:42:06 +0000 (0:00:01.511) 0:00:14.491 ************
2025-05-04 00:42:06.623364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:06.624146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:06.624970 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:06.626686 | orchestrator |
2025-05-04 00:42:06.628547 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-04 00:42:06.628814 | orchestrator | Sunday 04 May 2025 00:42:06 +0000 (0:00:00.177) 0:00:14.668 ************
2025-05-04 00:42:06.780686 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:06.782093 | orchestrator |
2025-05-04 00:42:06.954241 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-04 00:42:06.954346 | orchestrator | Sunday 04 May 2025 00:42:06 +0000 (0:00:00.158) 0:00:14.826 ************
2025-05-04 00:42:06.954378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:06.954441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:06.957335 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:06.960148 | orchestrator |
2025-05-04 00:42:06.960187 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-04 00:42:06.960923 | orchestrator | Sunday 04 May 2025 00:42:06 +0000 (0:00:00.171) 0:00:14.998 ************
2025-05-04 00:42:07.087313 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:07.087578 | orchestrator |
2025-05-04 00:42:07.087618 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-04 00:42:07.088586 | orchestrator | Sunday 04 May 2025 00:42:07 +0000 (0:00:00.134) 0:00:15.133 ************
2025-05-04 00:42:07.253176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})
2025-05-04 00:42:07.253782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})
2025-05-04 00:42:07.253844 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:07.255089 | orchestrator |
2025-05-04 00:42:07.256113 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-04 00:42:07.256897 | orchestrator | Sunday 04 May 2025 00:42:07 +0000 (0:00:00.165) 0:00:15.298 ************
2025-05-04 00:42:07.574531 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:42:07.575652 | orchestrator |
2025-05-04 00:42:07.577014 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-04 00:42:07.578123 | orchestrator | Sunday
04 May 2025 00:42:07 +0000 (0:00:00.320) 0:00:15.619 ************ 2025-05-04 00:42:07.734786 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:07.735843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:07.737962 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:07.738514 | orchestrator | 2025-05-04 00:42:07.738549 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-04 00:42:07.739501 | orchestrator | Sunday 04 May 2025 00:42:07 +0000 (0:00:00.161) 0:00:15.780 ************ 2025-05-04 00:42:07.873335 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:07.873847 | orchestrator | 2025-05-04 00:42:07.874822 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-04 00:42:07.876303 | orchestrator | Sunday 04 May 2025 00:42:07 +0000 (0:00:00.138) 0:00:15.919 ************ 2025-05-04 00:42:08.064029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:08.064992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:08.066129 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.067096 | orchestrator | 2025-05-04 00:42:08.069493 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-04 00:42:08.242813 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.190) 0:00:16.109 ************ 2025-05-04 00:42:08.242965 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:08.243580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:08.243620 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.245014 | orchestrator | 2025-05-04 00:42:08.245373 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-04 00:42:08.246202 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.178) 0:00:16.287 ************ 2025-05-04 00:42:08.414572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:08.415902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:08.416276 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.417056 | orchestrator | 2025-05-04 00:42:08.417301 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-04 00:42:08.417796 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.174) 0:00:16.462 ************ 2025-05-04 00:42:08.555914 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.556136 | orchestrator | 2025-05-04 00:42:08.556570 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-04 00:42:08.557542 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.140) 0:00:16.603 ************ 2025-05-04 00:42:08.698153 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.698554 | orchestrator | 2025-05-04 00:42:08.702277 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-04 00:42:08.710286 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.139) 0:00:16.742 ************ 2025-05-04 00:42:08.842960 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:08.843278 | orchestrator | 2025-05-04 00:42:08.844190 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-04 00:42:08.844840 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.147) 0:00:16.889 ************ 2025-05-04 00:42:09.001109 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:42:09.001415 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-04 00:42:09.004432 | orchestrator | } 2025-05-04 00:42:09.005896 | orchestrator | 2025-05-04 00:42:09.007061 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-04 00:42:09.007680 | orchestrator | Sunday 04 May 2025 00:42:08 +0000 (0:00:00.155) 0:00:17.044 ************ 2025-05-04 00:42:09.148950 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:42:09.149417 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-04 00:42:09.151067 | orchestrator | } 2025-05-04 00:42:09.152493 | orchestrator | 2025-05-04 00:42:09.153376 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-04 00:42:09.154372 | orchestrator | Sunday 04 May 2025 00:42:09 +0000 (0:00:00.150) 0:00:17.195 ************ 2025-05-04 00:42:09.278645 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:42:09.280904 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-04 00:42:09.283797 | orchestrator | } 2025-05-04 00:42:09.284285 | orchestrator | 2025-05-04 00:42:09.284319 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-04 00:42:09.284342 | orchestrator | Sunday 04 May 2025 00:42:09 +0000 (0:00:00.129) 0:00:17.325 ************ 2025-05-04 00:42:10.171251 | orchestrator | ok: 
[testbed-node-3] 2025-05-04 00:42:10.171899 | orchestrator | 2025-05-04 00:42:10.171949 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-04 00:42:10.172946 | orchestrator | Sunday 04 May 2025 00:42:10 +0000 (0:00:00.891) 0:00:18.217 ************ 2025-05-04 00:42:10.676008 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:10.677040 | orchestrator | 2025-05-04 00:42:10.678630 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-04 00:42:10.679180 | orchestrator | Sunday 04 May 2025 00:42:10 +0000 (0:00:00.505) 0:00:18.722 ************ 2025-05-04 00:42:11.174660 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:11.176158 | orchestrator | 2025-05-04 00:42:11.176628 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-04 00:42:11.176664 | orchestrator | Sunday 04 May 2025 00:42:11 +0000 (0:00:00.498) 0:00:19.220 ************ 2025-05-04 00:42:11.364019 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:11.364586 | orchestrator | 2025-05-04 00:42:11.365532 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-04 00:42:11.366111 | orchestrator | Sunday 04 May 2025 00:42:11 +0000 (0:00:00.189) 0:00:19.409 ************ 2025-05-04 00:42:11.484182 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:11.484403 | orchestrator | 2025-05-04 00:42:11.484886 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-04 00:42:11.485534 | orchestrator | Sunday 04 May 2025 00:42:11 +0000 (0:00:00.120) 0:00:19.530 ************ 2025-05-04 00:42:11.594684 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:11.594915 | orchestrator | 2025-05-04 00:42:11.595317 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-04 00:42:11.596101 | orchestrator | 
Sunday 04 May 2025 00:42:11 +0000 (0:00:00.110) 0:00:19.641 ************ 2025-05-04 00:42:11.747514 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:42:11.747972 | orchestrator |  "vgs_report": { 2025-05-04 00:42:11.748017 | orchestrator |  "vg": [] 2025-05-04 00:42:11.748601 | orchestrator |  } 2025-05-04 00:42:11.749372 | orchestrator | } 2025-05-04 00:42:11.749817 | orchestrator | 2025-05-04 00:42:11.750501 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-04 00:42:11.751198 | orchestrator | Sunday 04 May 2025 00:42:11 +0000 (0:00:00.151) 0:00:19.793 ************ 2025-05-04 00:42:11.887272 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:11.889297 | orchestrator | 2025-05-04 00:42:11.890381 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-04 00:42:11.891165 | orchestrator | Sunday 04 May 2025 00:42:11 +0000 (0:00:00.139) 0:00:19.932 ************ 2025-05-04 00:42:12.038874 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.039801 | orchestrator | 2025-05-04 00:42:12.040440 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-04 00:42:12.041714 | orchestrator | Sunday 04 May 2025 00:42:12 +0000 (0:00:00.151) 0:00:20.084 ************ 2025-05-04 00:42:12.169345 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.169832 | orchestrator | 2025-05-04 00:42:12.169966 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-04 00:42:12.170564 | orchestrator | Sunday 04 May 2025 00:42:12 +0000 (0:00:00.130) 0:00:20.215 ************ 2025-05-04 00:42:12.535684 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.536347 | orchestrator | 2025-05-04 00:42:12.537779 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-04 00:42:12.538623 | orchestrator | Sunday 
04 May 2025 00:42:12 +0000 (0:00:00.366) 0:00:20.582 ************ 2025-05-04 00:42:12.685128 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.685304 | orchestrator | 2025-05-04 00:42:12.686539 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-04 00:42:12.687951 | orchestrator | Sunday 04 May 2025 00:42:12 +0000 (0:00:00.150) 0:00:20.732 ************ 2025-05-04 00:42:12.826510 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.826736 | orchestrator | 2025-05-04 00:42:12.827693 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-04 00:42:12.828458 | orchestrator | Sunday 04 May 2025 00:42:12 +0000 (0:00:00.140) 0:00:20.873 ************ 2025-05-04 00:42:12.958212 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:12.959189 | orchestrator | 2025-05-04 00:42:12.959631 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-04 00:42:12.959667 | orchestrator | Sunday 04 May 2025 00:42:12 +0000 (0:00:00.132) 0:00:21.005 ************ 2025-05-04 00:42:13.107969 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.108171 | orchestrator | 2025-05-04 00:42:13.108794 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-04 00:42:13.109452 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.148) 0:00:21.153 ************ 2025-05-04 00:42:13.241124 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.241607 | orchestrator | 2025-05-04 00:42:13.242249 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-04 00:42:13.242745 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.134) 0:00:21.288 ************ 2025-05-04 00:42:13.395151 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.395473 | orchestrator | 2025-05-04 00:42:13.396014 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-04 00:42:13.396780 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.153) 0:00:21.441 ************ 2025-05-04 00:42:13.544633 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.545287 | orchestrator | 2025-05-04 00:42:13.545501 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-04 00:42:13.546156 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.149) 0:00:21.591 ************ 2025-05-04 00:42:13.707302 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.707509 | orchestrator | 2025-05-04 00:42:13.708012 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-04 00:42:13.708939 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.162) 0:00:21.753 ************ 2025-05-04 00:42:13.849683 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.850101 | orchestrator | 2025-05-04 00:42:13.850465 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-04 00:42:13.851567 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.142) 0:00:21.896 ************ 2025-05-04 00:42:13.990238 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:13.991393 | orchestrator | 2025-05-04 00:42:13.992209 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-04 00:42:13.995857 | orchestrator | Sunday 04 May 2025 00:42:13 +0000 (0:00:00.140) 0:00:22.036 ************ 2025-05-04 00:42:14.185030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:14.185474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 
'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:14.187145 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:14.187888 | orchestrator | 2025-05-04 00:42:14.188481 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-04 00:42:14.189446 | orchestrator | Sunday 04 May 2025 00:42:14 +0000 (0:00:00.194) 0:00:22.231 ************ 2025-05-04 00:42:14.647020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:14.647651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:14.648181 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:14.649086 | orchestrator | 2025-05-04 00:42:14.649884 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-04 00:42:14.650984 | orchestrator | Sunday 04 May 2025 00:42:14 +0000 (0:00:00.458) 0:00:22.690 ************ 2025-05-04 00:42:14.828658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:14.829575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:14.829611 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:14.830432 | orchestrator | 2025-05-04 00:42:14.830635 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-04 00:42:14.831677 | orchestrator | Sunday 04 May 2025 00:42:14 +0000 (0:00:00.186) 0:00:22.876 ************ 2025-05-04 00:42:15.012229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:15.012696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:15.012814 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:15.012891 | orchestrator | 2025-05-04 00:42:15.013234 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-04 00:42:15.013656 | orchestrator | Sunday 04 May 2025 00:42:15 +0000 (0:00:00.182) 0:00:23.058 ************ 2025-05-04 00:42:15.192215 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:15.195491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:15.195546 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:15.195572 | orchestrator | 2025-05-04 00:42:15.361475 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-04 00:42:15.361604 | orchestrator | Sunday 04 May 2025 00:42:15 +0000 (0:00:00.177) 0:00:23.236 ************ 2025-05-04 00:42:15.361642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:15.362107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:15.362894 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:15.363558 | orchestrator | 2025-05-04 00:42:15.364241 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-04 00:42:15.364967 | orchestrator | Sunday 04 May 2025 00:42:15 +0000 (0:00:00.171) 0:00:23.408 ************ 2025-05-04 00:42:15.568461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:15.568708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:15.570123 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:15.570158 | orchestrator | 2025-05-04 00:42:15.570182 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-04 00:42:15.570953 | orchestrator | Sunday 04 May 2025 00:42:15 +0000 (0:00:00.207) 0:00:23.615 ************ 2025-05-04 00:42:15.742428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:15.742942 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:15.743533 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:15.746280 | orchestrator | 2025-05-04 00:42:15.746730 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-04 00:42:15.747083 | orchestrator | Sunday 04 May 2025 00:42:15 +0000 (0:00:00.171) 0:00:23.786 ************ 2025-05-04 00:42:16.236237 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:16.236610 | orchestrator | 2025-05-04 00:42:16.236656 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-04 00:42:16.237150 | orchestrator | Sunday 04 May 2025 00:42:16 +0000 
(0:00:00.494) 0:00:24.281 ************ 2025-05-04 00:42:16.746891 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:16.907215 | orchestrator | 2025-05-04 00:42:16.907400 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-04 00:42:16.907423 | orchestrator | Sunday 04 May 2025 00:42:16 +0000 (0:00:00.510) 0:00:24.792 ************ 2025-05-04 00:42:16.907455 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:42:16.907531 | orchestrator | 2025-05-04 00:42:16.907907 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-04 00:42:16.908753 | orchestrator | Sunday 04 May 2025 00:42:16 +0000 (0:00:00.161) 0:00:24.954 ************ 2025-05-04 00:42:17.084342 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'vg_name': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}) 2025-05-04 00:42:17.084568 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'vg_name': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'}) 2025-05-04 00:42:17.085105 | orchestrator | 2025-05-04 00:42:17.085605 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-04 00:42:17.086343 | orchestrator | Sunday 04 May 2025 00:42:17 +0000 (0:00:00.176) 0:00:25.130 ************ 2025-05-04 00:42:17.472694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:17.472962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:17.473609 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:17.474357 | orchestrator | 2025-05-04 00:42:17.474788 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-05-04 00:42:17.477988 | orchestrator | Sunday 04 May 2025 00:42:17 +0000 (0:00:00.387) 0:00:25.518 ************ 2025-05-04 00:42:17.649242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:17.649537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:17.650009 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:17.650503 | orchestrator | 2025-05-04 00:42:17.651017 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-04 00:42:17.651740 | orchestrator | Sunday 04 May 2025 00:42:17 +0000 (0:00:00.177) 0:00:25.695 ************ 2025-05-04 00:42:17.825443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'})  2025-05-04 00:42:17.827279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'})  2025-05-04 00:42:17.828176 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:42:17.829894 | orchestrator | 2025-05-04 00:42:17.831231 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-04 00:42:17.831693 | orchestrator | Sunday 04 May 2025 00:42:17 +0000 (0:00:00.174) 0:00:25.869 ************ 2025-05-04 00:42:18.739700 | orchestrator | ok: [testbed-node-3] => { 2025-05-04 00:42:18.740799 | orchestrator |  "lvm_report": { 2025-05-04 00:42:18.742163 | orchestrator |  "lv": [ 2025-05-04 00:42:18.743048 | orchestrator |  { 2025-05-04 00:42:18.744151 | orchestrator |  "lv_name": 
"osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d", 2025-05-04 00:42:18.747006 | orchestrator |  "vg_name": "ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d" 2025-05-04 00:42:18.747374 | orchestrator |  }, 2025-05-04 00:42:18.747422 | orchestrator |  { 2025-05-04 00:42:18.748293 | orchestrator |  "lv_name": "osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b", 2025-05-04 00:42:18.749397 | orchestrator |  "vg_name": "ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b" 2025-05-04 00:42:18.749907 | orchestrator |  } 2025-05-04 00:42:18.750466 | orchestrator |  ], 2025-05-04 00:42:18.751006 | orchestrator |  "pv": [ 2025-05-04 00:42:18.751902 | orchestrator |  { 2025-05-04 00:42:18.752054 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-04 00:42:18.752652 | orchestrator |  "vg_name": "ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b" 2025-05-04 00:42:18.753604 | orchestrator |  }, 2025-05-04 00:42:18.753872 | orchestrator |  { 2025-05-04 00:42:18.754217 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-04 00:42:18.754844 | orchestrator |  "vg_name": "ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d" 2025-05-04 00:42:18.755076 | orchestrator |  } 2025-05-04 00:42:18.755680 | orchestrator |  ] 2025-05-04 00:42:18.756006 | orchestrator |  } 2025-05-04 00:42:18.756287 | orchestrator | } 2025-05-04 00:42:18.756652 | orchestrator | 2025-05-04 00:42:18.757121 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-04 00:42:18.757402 | orchestrator | 2025-05-04 00:42:18.757707 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-04 00:42:18.758198 | orchestrator | Sunday 04 May 2025 00:42:18 +0000 (0:00:00.915) 0:00:26.785 ************ 2025-05-04 00:42:18.993963 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-04 00:42:18.994912 | orchestrator | 2025-05-04 00:42:18.996379 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-04 
00:42:18.996817 | orchestrator | Sunday 04 May 2025 00:42:18 +0000 (0:00:00.253) 0:00:27.038 ************ 2025-05-04 00:42:19.254276 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:19.255214 | orchestrator | 2025-05-04 00:42:19.256511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:19.256552 | orchestrator | Sunday 04 May 2025 00:42:19 +0000 (0:00:00.252) 0:00:27.291 ************ 2025-05-04 00:42:19.710685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-04 00:42:19.711372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-04 00:42:19.712607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-04 00:42:19.713069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-04 00:42:19.713733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-04 00:42:19.714586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-04 00:42:19.715251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-04 00:42:19.715751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-04 00:42:19.716183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-04 00:42:19.717841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-04 00:42:19.718824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-04 00:42:19.718854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-04 00:42:19.718876 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-04 00:42:19.719360 | orchestrator | 2025-05-04 00:42:19.719385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:19.719408 | orchestrator | Sunday 04 May 2025 00:42:19 +0000 (0:00:00.466) 0:00:27.758 ************ 2025-05-04 00:42:19.927915 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:19.930231 | orchestrator | 2025-05-04 00:42:19.930611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:19.930678 | orchestrator | Sunday 04 May 2025 00:42:19 +0000 (0:00:00.213) 0:00:27.971 ************ 2025-05-04 00:42:20.143564 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:20.144019 | orchestrator | 2025-05-04 00:42:20.144315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:20.145211 | orchestrator | Sunday 04 May 2025 00:42:20 +0000 (0:00:00.217) 0:00:28.189 ************ 2025-05-04 00:42:20.352720 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:20.353972 | orchestrator | 2025-05-04 00:42:20.354393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:20.354433 | orchestrator | Sunday 04 May 2025 00:42:20 +0000 (0:00:00.209) 0:00:28.399 ************ 2025-05-04 00:42:20.557100 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:20.558353 | orchestrator | 2025-05-04 00:42:20.559218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:20.559984 | orchestrator | Sunday 04 May 2025 00:42:20 +0000 (0:00:00.205) 0:00:28.604 ************ 2025-05-04 00:42:20.757471 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:20.758373 | orchestrator | 2025-05-04 00:42:20.759561 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-04 00:42:20.760578 | orchestrator | Sunday 04 May 2025 00:42:20 +0000 (0:00:00.200) 0:00:28.804 ************ 2025-05-04 00:42:20.954606 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:20.955350 | orchestrator | 2025-05-04 00:42:20.956230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:20.957114 | orchestrator | Sunday 04 May 2025 00:42:20 +0000 (0:00:00.196) 0:00:29.001 ************ 2025-05-04 00:42:21.582694 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:21.582962 | orchestrator | 2025-05-04 00:42:21.584082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:21.585190 | orchestrator | Sunday 04 May 2025 00:42:21 +0000 (0:00:00.626) 0:00:29.628 ************ 2025-05-04 00:42:21.789645 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:21.790077 | orchestrator | 2025-05-04 00:42:21.790752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:21.791692 | orchestrator | Sunday 04 May 2025 00:42:21 +0000 (0:00:00.208) 0:00:29.836 ************ 2025-05-04 00:42:22.216581 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47) 2025-05-04 00:42:22.217274 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47) 2025-05-04 00:42:22.217359 | orchestrator | 2025-05-04 00:42:22.217964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:22.218536 | orchestrator | Sunday 04 May 2025 00:42:22 +0000 (0:00:00.422) 0:00:30.259 ************ 2025-05-04 00:42:22.682620 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef) 2025-05-04 00:42:22.685259 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef) 2025-05-04 00:42:22.687848 | orchestrator | 2025-05-04 00:42:22.688177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:22.688210 | orchestrator | Sunday 04 May 2025 00:42:22 +0000 (0:00:00.469) 0:00:30.729 ************ 2025-05-04 00:42:23.120838 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254) 2025-05-04 00:42:23.121052 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254) 2025-05-04 00:42:23.121084 | orchestrator | 2025-05-04 00:42:23.121858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:23.122239 | orchestrator | Sunday 04 May 2025 00:42:23 +0000 (0:00:00.438) 0:00:31.167 ************ 2025-05-04 00:42:23.565029 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f) 2025-05-04 00:42:23.567356 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f) 2025-05-04 00:42:23.567406 | orchestrator | 2025-05-04 00:42:23.568004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:23.568525 | orchestrator | Sunday 04 May 2025 00:42:23 +0000 (0:00:00.441) 0:00:31.609 ************ 2025-05-04 00:42:23.900101 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-04 00:42:23.900370 | orchestrator | 2025-05-04 00:42:23.902916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:23.902983 | orchestrator | Sunday 04 May 2025 00:42:23 +0000 (0:00:00.337) 0:00:31.947 ************ 2025-05-04 00:42:24.372871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-05-04 00:42:24.373067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-04 00:42:24.374092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-04 00:42:24.374639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-04 00:42:24.375131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-04 00:42:24.375716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-04 00:42:24.377525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-04 00:42:24.378100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-04 00:42:24.378628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-04 00:42:24.379896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-04 00:42:24.380402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-04 00:42:24.380734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-04 00:42:24.381122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-04 00:42:24.381667 | orchestrator | 2025-05-04 00:42:24.381904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:24.382505 | orchestrator | Sunday 04 May 2025 00:42:24 +0000 (0:00:00.470) 0:00:32.418 ************ 2025-05-04 00:42:24.583385 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:24.583616 | orchestrator | 2025-05-04 
00:42:24.584560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:24.585799 | orchestrator | Sunday 04 May 2025 00:42:24 +0000 (0:00:00.211) 0:00:32.630 ************ 2025-05-04 00:42:25.189195 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:25.189477 | orchestrator | 2025-05-04 00:42:25.191016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:25.192096 | orchestrator | Sunday 04 May 2025 00:42:25 +0000 (0:00:00.605) 0:00:33.235 ************ 2025-05-04 00:42:25.394416 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:25.395007 | orchestrator | 2025-05-04 00:42:25.395956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:25.396973 | orchestrator | Sunday 04 May 2025 00:42:25 +0000 (0:00:00.203) 0:00:33.438 ************ 2025-05-04 00:42:25.598668 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:25.599431 | orchestrator | 2025-05-04 00:42:25.599705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:25.601111 | orchestrator | Sunday 04 May 2025 00:42:25 +0000 (0:00:00.206) 0:00:33.645 ************ 2025-05-04 00:42:25.817149 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:25.817685 | orchestrator | 2025-05-04 00:42:25.819156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:25.819963 | orchestrator | Sunday 04 May 2025 00:42:25 +0000 (0:00:00.217) 0:00:33.863 ************ 2025-05-04 00:42:26.030242 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:26.030424 | orchestrator | 2025-05-04 00:42:26.032044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:26.033009 | orchestrator | Sunday 04 May 2025 00:42:26 +0000 (0:00:00.213) 
0:00:34.077 ************ 2025-05-04 00:42:26.257613 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:26.257876 | orchestrator | 2025-05-04 00:42:26.257906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:26.257941 | orchestrator | Sunday 04 May 2025 00:42:26 +0000 (0:00:00.223) 0:00:34.301 ************ 2025-05-04 00:42:26.465035 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:26.466226 | orchestrator | 2025-05-04 00:42:26.466994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:26.468622 | orchestrator | Sunday 04 May 2025 00:42:26 +0000 (0:00:00.210) 0:00:34.511 ************ 2025-05-04 00:42:27.340920 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-04 00:42:27.341124 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-04 00:42:27.342885 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-04 00:42:27.343700 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-04 00:42:27.343735 | orchestrator | 2025-05-04 00:42:27.344519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:27.345171 | orchestrator | Sunday 04 May 2025 00:42:27 +0000 (0:00:00.872) 0:00:35.384 ************ 2025-05-04 00:42:27.546706 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:27.546924 | orchestrator | 2025-05-04 00:42:27.547692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:27.548887 | orchestrator | Sunday 04 May 2025 00:42:27 +0000 (0:00:00.208) 0:00:35.593 ************ 2025-05-04 00:42:27.739200 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:27.739437 | orchestrator | 2025-05-04 00:42:27.739811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:27.740197 | orchestrator | Sunday 04 
May 2025 00:42:27 +0000 (0:00:00.192) 0:00:35.786 ************ 2025-05-04 00:42:28.369980 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:28.370378 | orchestrator | 2025-05-04 00:42:28.370811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:28.371240 | orchestrator | Sunday 04 May 2025 00:42:28 +0000 (0:00:00.629) 0:00:36.415 ************ 2025-05-04 00:42:28.574098 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:28.574484 | orchestrator | 2025-05-04 00:42:28.575176 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-04 00:42:28.576182 | orchestrator | Sunday 04 May 2025 00:42:28 +0000 (0:00:00.205) 0:00:36.621 ************ 2025-05-04 00:42:28.715863 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:28.716249 | orchestrator | 2025-05-04 00:42:28.716296 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-04 00:42:28.717010 | orchestrator | Sunday 04 May 2025 00:42:28 +0000 (0:00:00.140) 0:00:36.761 ************ 2025-05-04 00:42:28.919226 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}}) 2025-05-04 00:42:28.919454 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5e087d3a-1c7d-5e62-b576-6c121f884fde'}}) 2025-05-04 00:42:28.921045 | orchestrator | 2025-05-04 00:42:28.922480 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-04 00:42:28.924342 | orchestrator | Sunday 04 May 2025 00:42:28 +0000 (0:00:00.204) 0:00:36.966 ************ 2025-05-04 00:42:30.791919 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}) 2025-05-04 00:42:30.792159 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'}) 2025-05-04 00:42:30.793636 | orchestrator | 2025-05-04 00:42:30.793912 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-04 00:42:30.794559 | orchestrator | Sunday 04 May 2025 00:42:30 +0000 (0:00:01.870) 0:00:38.836 ************ 2025-05-04 00:42:30.962362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:30.962554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:30.963913 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:30.964120 | orchestrator | 2025-05-04 00:42:30.964407 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-04 00:42:30.964978 | orchestrator | Sunday 04 May 2025 00:42:30 +0000 (0:00:00.172) 0:00:39.009 ************ 2025-05-04 00:42:32.286718 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}) 2025-05-04 00:42:32.287720 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'}) 2025-05-04 00:42:32.288099 | orchestrator | 2025-05-04 00:42:32.288852 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-04 00:42:32.289677 | orchestrator | Sunday 04 May 2025 00:42:32 +0000 (0:00:01.323) 0:00:40.332 ************ 2025-05-04 00:42:32.480088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 
'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:32.481816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:32.482866 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:32.484963 | orchestrator | 2025-05-04 00:42:32.625733 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-04 00:42:32.625907 | orchestrator | Sunday 04 May 2025 00:42:32 +0000 (0:00:00.194) 0:00:40.526 ************ 2025-05-04 00:42:32.625945 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:32.626576 | orchestrator | 2025-05-04 00:42:32.627343 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-04 00:42:32.628215 | orchestrator | Sunday 04 May 2025 00:42:32 +0000 (0:00:00.145) 0:00:40.672 ************ 2025-05-04 00:42:32.968231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:32.970973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:32.973221 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:32.974442 | orchestrator | 2025-05-04 00:42:32.975579 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-04 00:42:32.976231 | orchestrator | Sunday 04 May 2025 00:42:32 +0000 (0:00:00.341) 0:00:41.014 ************ 2025-05-04 00:42:33.121463 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:33.121954 | orchestrator | 2025-05-04 00:42:33.123159 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-04 00:42:33.123941 | orchestrator | Sunday 
04 May 2025 00:42:33 +0000 (0:00:00.154) 0:00:41.168 ************ 2025-05-04 00:42:33.308145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:33.309039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:33.309106 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:33.309135 | orchestrator | 2025-05-04 00:42:33.309172 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-04 00:42:33.446273 | orchestrator | Sunday 04 May 2025 00:42:33 +0000 (0:00:00.187) 0:00:41.356 ************ 2025-05-04 00:42:33.446414 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:33.447309 | orchestrator | 2025-05-04 00:42:33.448126 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-04 00:42:33.448833 | orchestrator | Sunday 04 May 2025 00:42:33 +0000 (0:00:00.137) 0:00:41.493 ************ 2025-05-04 00:42:33.608162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:33.608928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:33.609946 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:33.610953 | orchestrator | 2025-05-04 00:42:33.613351 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-04 00:42:33.748930 | orchestrator | Sunday 04 May 2025 00:42:33 +0000 (0:00:00.161) 0:00:41.655 ************ 2025-05-04 00:42:33.749070 | orchestrator | ok: [testbed-node-4] 
2025-05-04 00:42:33.749920 | orchestrator | 2025-05-04 00:42:33.750472 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-04 00:42:33.751418 | orchestrator | Sunday 04 May 2025 00:42:33 +0000 (0:00:00.140) 0:00:41.795 ************ 2025-05-04 00:42:33.919738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:33.920463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:33.921909 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:33.923366 | orchestrator | 2025-05-04 00:42:33.924632 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-04 00:42:33.924722 | orchestrator | Sunday 04 May 2025 00:42:33 +0000 (0:00:00.170) 0:00:41.965 ************ 2025-05-04 00:42:34.102362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:34.103457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:34.107808 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:34.108441 | orchestrator | 2025-05-04 00:42:34.109277 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-04 00:42:34.109962 | orchestrator | Sunday 04 May 2025 00:42:34 +0000 (0:00:00.183) 0:00:42.149 ************ 2025-05-04 00:42:34.275078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 
00:42:34.275298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:34.275378 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:34.275407 | orchestrator | 2025-05-04 00:42:34.277060 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-04 00:42:34.277443 | orchestrator | Sunday 04 May 2025 00:42:34 +0000 (0:00:00.172) 0:00:42.321 ************ 2025-05-04 00:42:34.435666 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:34.437116 | orchestrator | 2025-05-04 00:42:34.437567 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-04 00:42:34.438226 | orchestrator | Sunday 04 May 2025 00:42:34 +0000 (0:00:00.160) 0:00:42.482 ************ 2025-05-04 00:42:34.597393 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:34.597695 | orchestrator | 2025-05-04 00:42:34.598380 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-04 00:42:34.598943 | orchestrator | Sunday 04 May 2025 00:42:34 +0000 (0:00:00.161) 0:00:42.644 ************ 2025-05-04 00:42:34.737200 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:34.737404 | orchestrator | 2025-05-04 00:42:34.737723 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-04 00:42:34.738514 | orchestrator | Sunday 04 May 2025 00:42:34 +0000 (0:00:00.139) 0:00:42.783 ************ 2025-05-04 00:42:35.108079 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:42:35.109229 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-04 00:42:35.109462 | orchestrator | } 2025-05-04 00:42:35.109898 | orchestrator | 2025-05-04 00:42:35.110379 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-04 
00:42:35.110626 | orchestrator | Sunday 04 May 2025 00:42:35 +0000 (0:00:00.370) 0:00:43.154 ************ 2025-05-04 00:42:35.254892 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:42:35.255248 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-04 00:42:35.255268 | orchestrator | } 2025-05-04 00:42:35.255640 | orchestrator | 2025-05-04 00:42:35.256090 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-04 00:42:35.256458 | orchestrator | Sunday 04 May 2025 00:42:35 +0000 (0:00:00.146) 0:00:43.301 ************ 2025-05-04 00:42:35.394139 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:42:35.394446 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-04 00:42:35.395352 | orchestrator | } 2025-05-04 00:42:35.396005 | orchestrator | 2025-05-04 00:42:35.396752 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-04 00:42:35.397175 | orchestrator | Sunday 04 May 2025 00:42:35 +0000 (0:00:00.139) 0:00:43.440 ************ 2025-05-04 00:42:35.912033 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:35.912289 | orchestrator | 2025-05-04 00:42:35.913019 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-04 00:42:35.913444 | orchestrator | Sunday 04 May 2025 00:42:35 +0000 (0:00:00.517) 0:00:43.958 ************ 2025-05-04 00:42:36.429200 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:36.429471 | orchestrator | 2025-05-04 00:42:36.430583 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-04 00:42:36.434805 | orchestrator | Sunday 04 May 2025 00:42:36 +0000 (0:00:00.516) 0:00:44.474 ************ 2025-05-04 00:42:36.958947 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:36.959588 | orchestrator | 2025-05-04 00:42:36.960366 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-05-04 00:42:36.966488 | orchestrator | Sunday 04 May 2025 00:42:36 +0000 (0:00:00.529) 0:00:45.004 ************ 2025-05-04 00:42:37.106656 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:37.107865 | orchestrator | 2025-05-04 00:42:37.109415 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-04 00:42:37.109615 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.149) 0:00:45.153 ************ 2025-05-04 00:42:37.224487 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:37.229113 | orchestrator | 2025-05-04 00:42:37.231268 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-04 00:42:37.232537 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.115) 0:00:45.269 ************ 2025-05-04 00:42:37.360451 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:37.361465 | orchestrator | 2025-05-04 00:42:37.362376 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-04 00:42:37.363089 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.137) 0:00:45.406 ************ 2025-05-04 00:42:37.511941 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:42:37.512445 | orchestrator |  "vgs_report": { 2025-05-04 00:42:37.512953 | orchestrator |  "vg": [] 2025-05-04 00:42:37.513937 | orchestrator |  } 2025-05-04 00:42:37.518007 | orchestrator | } 2025-05-04 00:42:37.519385 | orchestrator | 2025-05-04 00:42:37.520687 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-04 00:42:37.521124 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.151) 0:00:45.558 ************ 2025-05-04 00:42:37.850302 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:37.851263 | orchestrator | 2025-05-04 00:42:37.852232 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-05-04 00:42:37.856192 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.338) 0:00:45.896 ************ 2025-05-04 00:42:37.984443 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:37.985540 | orchestrator | 2025-05-04 00:42:37.989433 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-04 00:42:38.144617 | orchestrator | Sunday 04 May 2025 00:42:37 +0000 (0:00:00.131) 0:00:46.028 ************ 2025-05-04 00:42:38.144945 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:38.145054 | orchestrator | 2025-05-04 00:42:38.146385 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-04 00:42:38.147032 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.162) 0:00:46.190 ************ 2025-05-04 00:42:38.298686 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:38.299230 | orchestrator | 2025-05-04 00:42:38.300741 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-04 00:42:38.301906 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.154) 0:00:46.345 ************ 2025-05-04 00:42:38.453727 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:38.454668 | orchestrator | 2025-05-04 00:42:38.455883 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-04 00:42:38.457016 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.154) 0:00:46.500 ************ 2025-05-04 00:42:38.591757 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:38.592168 | orchestrator | 2025-05-04 00:42:38.593423 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-04 00:42:38.596855 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.137) 0:00:46.638 ************ 2025-05-04 00:42:38.755762 | orchestrator | skipping: [testbed-node-4] 
2025-05-04 00:42:38.756594 | orchestrator | 2025-05-04 00:42:38.757119 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-04 00:42:38.758148 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.164) 0:00:46.802 ************ 2025-05-04 00:42:38.905265 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:38.905471 | orchestrator | 2025-05-04 00:42:38.909730 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-04 00:42:38.910314 | orchestrator | Sunday 04 May 2025 00:42:38 +0000 (0:00:00.147) 0:00:46.949 ************ 2025-05-04 00:42:39.051363 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:39.052334 | orchestrator | 2025-05-04 00:42:39.053374 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-04 00:42:39.054167 | orchestrator | Sunday 04 May 2025 00:42:39 +0000 (0:00:00.146) 0:00:47.096 ************ 2025-05-04 00:42:39.196920 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:39.197607 | orchestrator | 2025-05-04 00:42:39.198738 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-04 00:42:39.201797 | orchestrator | Sunday 04 May 2025 00:42:39 +0000 (0:00:00.146) 0:00:47.243 ************ 2025-05-04 00:42:39.347870 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:39.348873 | orchestrator | 2025-05-04 00:42:39.349874 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-04 00:42:39.350705 | orchestrator | Sunday 04 May 2025 00:42:39 +0000 (0:00:00.151) 0:00:47.394 ************ 2025-05-04 00:42:39.485525 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:39.486162 | orchestrator | 2025-05-04 00:42:39.486211 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-04 00:42:39.487394 | orchestrator | 
Sunday 04 May 2025 00:42:39 +0000 (0:00:00.135) 0:00:47.530 ************ 2025-05-04 00:42:39.909932 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:39.910545 | orchestrator | 2025-05-04 00:42:39.911558 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-04 00:42:39.912044 | orchestrator | Sunday 04 May 2025 00:42:39 +0000 (0:00:00.425) 0:00:47.956 ************ 2025-05-04 00:42:40.080609 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:40.080937 | orchestrator | 2025-05-04 00:42:40.081387 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-04 00:42:40.082412 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.164) 0:00:48.121 ************ 2025-05-04 00:42:40.270231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:40.271663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:40.271706 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:40.273380 | orchestrator | 2025-05-04 00:42:40.274850 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-04 00:42:40.275223 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.194) 0:00:48.315 ************ 2025-05-04 00:42:40.436792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:40.437012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:40.438591 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:42:40.442825 | orchestrator | 2025-05-04 00:42:40.605565 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-04 00:42:40.605690 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.166) 0:00:48.482 ************ 2025-05-04 00:42:40.605744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:40.606614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:40.609230 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:40.610344 | orchestrator | 2025-05-04 00:42:40.610396 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-04 00:42:40.611658 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.169) 0:00:48.652 ************ 2025-05-04 00:42:40.769710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:40.770522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:40.774165 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:40.774332 | orchestrator | 2025-05-04 00:42:40.774357 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-04 00:42:40.774396 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.163) 0:00:48.815 ************ 2025-05-04 00:42:40.946366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 
'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:40.949792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:40.950762 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:40.950832 | orchestrator | 2025-05-04 00:42:40.952087 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-04 00:42:40.952721 | orchestrator | Sunday 04 May 2025 00:42:40 +0000 (0:00:00.176) 0:00:48.992 ************ 2025-05-04 00:42:41.119411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:41.119615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:41.119921 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:41.124462 | orchestrator | 2025-05-04 00:42:41.124740 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-04 00:42:41.124805 | orchestrator | Sunday 04 May 2025 00:42:41 +0000 (0:00:00.173) 0:00:49.165 ************ 2025-05-04 00:42:41.285150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:41.285335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:41.286333 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:41.287317 | orchestrator | 2025-05-04 00:42:41.291141 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-04 
00:42:41.291969 | orchestrator | Sunday 04 May 2025 00:42:41 +0000 (0:00:00.165) 0:00:49.330 ************ 2025-05-04 00:42:41.459527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:41.459710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:41.460489 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:41.460702 | orchestrator | 2025-05-04 00:42:41.461040 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-04 00:42:41.461388 | orchestrator | Sunday 04 May 2025 00:42:41 +0000 (0:00:00.176) 0:00:49.507 ************ 2025-05-04 00:42:41.964964 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:41.965150 | orchestrator | 2025-05-04 00:42:41.968202 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-04 00:42:41.968341 | orchestrator | Sunday 04 May 2025 00:42:41 +0000 (0:00:00.504) 0:00:50.012 ************ 2025-05-04 00:42:42.738412 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:42.739006 | orchestrator | 2025-05-04 00:42:42.739082 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-04 00:42:42.739957 | orchestrator | Sunday 04 May 2025 00:42:42 +0000 (0:00:00.770) 0:00:50.783 ************ 2025-05-04 00:42:42.901325 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:42:42.901572 | orchestrator | 2025-05-04 00:42:42.901611 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-04 00:42:42.902549 | orchestrator | Sunday 04 May 2025 00:42:42 +0000 (0:00:00.164) 0:00:50.947 ************ 2025-05-04 00:42:43.099167 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'vg_name': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}) 2025-05-04 00:42:43.099946 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'vg_name': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'}) 2025-05-04 00:42:43.100576 | orchestrator | 2025-05-04 00:42:43.101476 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-04 00:42:43.105063 | orchestrator | Sunday 04 May 2025 00:42:43 +0000 (0:00:00.197) 0:00:51.145 ************ 2025-05-04 00:42:43.278737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:43.279006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:43.279690 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:43.280433 | orchestrator | 2025-05-04 00:42:43.280992 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-04 00:42:43.281834 | orchestrator | Sunday 04 May 2025 00:42:43 +0000 (0:00:00.179) 0:00:51.325 ************ 2025-05-04 00:42:43.456579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:43.457995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:43.458718 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:43.462241 | orchestrator | 2025-05-04 00:42:43.625630 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-04 00:42:43.625755 | 
orchestrator | Sunday 04 May 2025 00:42:43 +0000 (0:00:00.178) 0:00:51.503 ************ 2025-05-04 00:42:43.625849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'})  2025-05-04 00:42:43.625956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'})  2025-05-04 00:42:43.627329 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:42:43.627852 | orchestrator | 2025-05-04 00:42:43.628763 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-04 00:42:43.632360 | orchestrator | Sunday 04 May 2025 00:42:43 +0000 (0:00:00.168) 0:00:51.671 ************ 2025-05-04 00:42:44.627409 | orchestrator | ok: [testbed-node-4] => { 2025-05-04 00:42:44.627940 | orchestrator |  "lvm_report": { 2025-05-04 00:42:44.629049 | orchestrator |  "lv": [ 2025-05-04 00:42:44.633019 | orchestrator |  { 2025-05-04 00:42:44.633572 | orchestrator |  "lv_name": "osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f", 2025-05-04 00:42:44.634456 | orchestrator |  "vg_name": "ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f" 2025-05-04 00:42:44.635161 | orchestrator |  }, 2025-05-04 00:42:44.635943 | orchestrator |  { 2025-05-04 00:42:44.636902 | orchestrator |  "lv_name": "osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde", 2025-05-04 00:42:44.637514 | orchestrator |  "vg_name": "ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde" 2025-05-04 00:42:44.639890 | orchestrator |  } 2025-05-04 00:42:44.640643 | orchestrator |  ], 2025-05-04 00:42:44.641135 | orchestrator |  "pv": [ 2025-05-04 00:42:44.642074 | orchestrator |  { 2025-05-04 00:42:44.645594 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-04 00:42:44.646637 | orchestrator |  "vg_name": "ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f" 2025-05-04 00:42:44.647198 | orchestrator |  }, 2025-05-04 
00:42:44.647976 | orchestrator |  { 2025-05-04 00:42:44.648974 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-04 00:42:44.649818 | orchestrator |  "vg_name": "ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde" 2025-05-04 00:42:44.652097 | orchestrator |  } 2025-05-04 00:42:44.656046 | orchestrator |  ] 2025-05-04 00:42:44.656324 | orchestrator |  } 2025-05-04 00:42:44.657838 | orchestrator | } 2025-05-04 00:42:44.658729 | orchestrator | 2025-05-04 00:42:44.659068 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-04 00:42:44.659410 | orchestrator | 2025-05-04 00:42:44.659982 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-04 00:42:44.661544 | orchestrator | Sunday 04 May 2025 00:42:44 +0000 (0:00:00.998) 0:00:52.670 ************ 2025-05-04 00:42:44.910996 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-04 00:42:44.911232 | orchestrator | 2025-05-04 00:42:44.911848 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-04 00:42:44.912534 | orchestrator | Sunday 04 May 2025 00:42:44 +0000 (0:00:00.286) 0:00:52.957 ************ 2025-05-04 00:42:45.149978 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:42:45.150227 | orchestrator | 2025-05-04 00:42:45.151819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:45.155974 | orchestrator | Sunday 04 May 2025 00:42:45 +0000 (0:00:00.238) 0:00:53.195 ************ 2025-05-04 00:42:45.630505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-04 00:42:45.631364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-04 00:42:45.633073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-04 00:42:45.636647 | 
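The tasks above ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output") merge the JSON reports of `lvs` and `pvs` into the single `lvm_report` structure printed for testbed-node-4. A minimal sketch of that combine step, assuming the standard `--reportformat json` output shape of LVM2; the helper name and exact input handling are illustrative, not the playbook's actual code:

```python
import json

def combine_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Merge `lvs --reportformat json` and `pvs --reportformat json`
    output into one dict shaped like the printed lvm_report."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lvm_report": {"lv": lvs, "pv": pvs}}

# Sample data matching the report printed for testbed-node-4.
_lvs = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f",
     "vg_name": "ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f"}]}]})
_pvs = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f"}]}]})

report = combine_lvm_report(_lvs, _pvs)
```

The combined dict carries both views (LV names and backing PVs keyed by VG), which is what the later "Fail if ... LV defined in lvm_volumes is missing" checks iterate over.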
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-04 00:42:45.637330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-04 00:42:45.638812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-04 00:42:45.639469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-04 00:42:45.639498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-04 00:42:45.640035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-04 00:42:45.642533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-04 00:42:45.643303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-04 00:42:45.644078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-04 00:42:45.645833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-04 00:42:45.646420 | orchestrator | 2025-05-04 00:42:45.646739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:45.647265 | orchestrator | Sunday 04 May 2025 00:42:45 +0000 (0:00:00.480) 0:00:53.676 ************ 2025-05-04 00:42:45.844279 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:45.846368 | orchestrator | 2025-05-04 00:42:45.847096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:45.847413 | orchestrator | Sunday 04 May 2025 00:42:45 +0000 (0:00:00.214) 0:00:53.891 ************ 2025-05-04 00:42:46.068070 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:46.069186 | orchestrator | 2025-05-04 
00:42:46.070111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:46.072905 | orchestrator | Sunday 04 May 2025 00:42:46 +0000 (0:00:00.223) 0:00:54.114 ************ 2025-05-04 00:42:46.267652 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:46.270147 | orchestrator | 2025-05-04 00:42:46.270192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:46.270218 | orchestrator | Sunday 04 May 2025 00:42:46 +0000 (0:00:00.189) 0:00:54.304 ************ 2025-05-04 00:42:46.477299 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:46.477917 | orchestrator | 2025-05-04 00:42:46.478265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:46.479816 | orchestrator | Sunday 04 May 2025 00:42:46 +0000 (0:00:00.219) 0:00:54.524 ************ 2025-05-04 00:42:46.968332 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:46.968957 | orchestrator | 2025-05-04 00:42:46.969567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:46.970327 | orchestrator | Sunday 04 May 2025 00:42:46 +0000 (0:00:00.491) 0:00:55.015 ************ 2025-05-04 00:42:47.167387 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:47.168947 | orchestrator | 2025-05-04 00:42:47.394594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:47.394716 | orchestrator | Sunday 04 May 2025 00:42:47 +0000 (0:00:00.196) 0:00:55.212 ************ 2025-05-04 00:42:47.394751 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:47.394867 | orchestrator | 2025-05-04 00:42:47.394893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:47.395427 | orchestrator | Sunday 04 May 2025 00:42:47 +0000 (0:00:00.229) 
0:00:55.442 ************ 2025-05-04 00:42:47.596915 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:47.597226 | orchestrator | 2025-05-04 00:42:47.597620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:47.598459 | orchestrator | Sunday 04 May 2025 00:42:47 +0000 (0:00:00.202) 0:00:55.644 ************ 2025-05-04 00:42:48.030516 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9) 2025-05-04 00:42:48.031183 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9) 2025-05-04 00:42:48.032248 | orchestrator | 2025-05-04 00:42:48.032804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:48.033534 | orchestrator | Sunday 04 May 2025 00:42:48 +0000 (0:00:00.432) 0:00:56.077 ************ 2025-05-04 00:42:48.495993 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d) 2025-05-04 00:42:48.496248 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d) 2025-05-04 00:42:48.496286 | orchestrator | 2025-05-04 00:42:48.496940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:48.497429 | orchestrator | Sunday 04 May 2025 00:42:48 +0000 (0:00:00.460) 0:00:56.537 ************ 2025-05-04 00:42:48.932112 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783) 2025-05-04 00:42:48.932305 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783) 2025-05-04 00:42:48.932629 | orchestrator | 2025-05-04 00:42:48.933454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:48.934181 | orchestrator | Sunday 04 
May 2025 00:42:48 +0000 (0:00:00.441) 0:00:56.979 ************ 2025-05-04 00:42:49.371096 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123) 2025-05-04 00:42:49.372487 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123) 2025-05-04 00:42:49.372554 | orchestrator | 2025-05-04 00:42:49.373441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-04 00:42:49.374214 | orchestrator | Sunday 04 May 2025 00:42:49 +0000 (0:00:00.437) 0:00:57.416 ************ 2025-05-04 00:42:49.710372 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-04 00:42:49.711417 | orchestrator | 2025-05-04 00:42:49.712306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:49.713334 | orchestrator | Sunday 04 May 2025 00:42:49 +0000 (0:00:00.339) 0:00:57.756 ************ 2025-05-04 00:42:50.385502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-04 00:42:50.385879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-04 00:42:50.385921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-04 00:42:50.389892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-04 00:42:50.390268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-04 00:42:50.390307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-04 00:42:50.393683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-04 00:42:50.393967 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-04 00:42:50.393994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-04 00:42:50.394009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-04 00:42:50.394079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-04 00:42:50.394101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-04 00:42:50.394521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-04 00:42:50.394761 | orchestrator | 2025-05-04 00:42:50.395150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:50.395531 | orchestrator | Sunday 04 May 2025 00:42:50 +0000 (0:00:00.673) 0:00:58.429 ************ 2025-05-04 00:42:50.616257 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:50.616457 | orchestrator | 2025-05-04 00:42:50.616486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:50.616865 | orchestrator | Sunday 04 May 2025 00:42:50 +0000 (0:00:00.234) 0:00:58.664 ************ 2025-05-04 00:42:50.833002 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:50.834100 | orchestrator | 2025-05-04 00:42:50.834141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:50.835180 | orchestrator | Sunday 04 May 2025 00:42:50 +0000 (0:00:00.214) 0:00:58.879 ************ 2025-05-04 00:42:51.030738 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:51.031071 | orchestrator | 2025-05-04 00:42:51.031840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:51.032345 | 
orchestrator | Sunday 04 May 2025 00:42:51 +0000 (0:00:00.197) 0:00:59.077 ************ 2025-05-04 00:42:51.236607 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:51.236857 | orchestrator | 2025-05-04 00:42:51.237741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:51.238758 | orchestrator | Sunday 04 May 2025 00:42:51 +0000 (0:00:00.205) 0:00:59.283 ************ 2025-05-04 00:42:51.435647 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:51.435948 | orchestrator | 2025-05-04 00:42:51.436530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:51.437303 | orchestrator | Sunday 04 May 2025 00:42:51 +0000 (0:00:00.198) 0:00:59.482 ************ 2025-05-04 00:42:51.642284 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:51.642946 | orchestrator | 2025-05-04 00:42:51.643143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:51.643725 | orchestrator | Sunday 04 May 2025 00:42:51 +0000 (0:00:00.205) 0:00:59.688 ************ 2025-05-04 00:42:51.850711 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:51.851016 | orchestrator | 2025-05-04 00:42:51.851350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:51.851842 | orchestrator | Sunday 04 May 2025 00:42:51 +0000 (0:00:00.209) 0:00:59.897 ************ 2025-05-04 00:42:52.067869 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:52.069207 | orchestrator | 2025-05-04 00:42:52.070005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:52.070615 | orchestrator | Sunday 04 May 2025 00:42:52 +0000 (0:00:00.216) 0:01:00.114 ************ 2025-05-04 00:42:53.072447 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-04 00:42:53.074408 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-05-04 00:42:53.074817 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-04 00:42:53.075656 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-04 00:42:53.075682 | orchestrator | 2025-05-04 00:42:53.075700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:53.075748 | orchestrator | Sunday 04 May 2025 00:42:53 +0000 (0:00:01.001) 0:01:01.116 ************ 2025-05-04 00:42:53.695235 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:53.695423 | orchestrator | 2025-05-04 00:42:53.696644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:53.696712 | orchestrator | Sunday 04 May 2025 00:42:53 +0000 (0:00:00.624) 0:01:01.741 ************ 2025-05-04 00:42:53.905250 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:53.906262 | orchestrator | 2025-05-04 00:42:53.907624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:53.909984 | orchestrator | Sunday 04 May 2025 00:42:53 +0000 (0:00:00.210) 0:01:01.951 ************ 2025-05-04 00:42:54.126001 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:54.126861 | orchestrator | 2025-05-04 00:42:54.127918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-04 00:42:54.131129 | orchestrator | Sunday 04 May 2025 00:42:54 +0000 (0:00:00.220) 0:01:02.172 ************ 2025-05-04 00:42:54.349162 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:54.350105 | orchestrator | 2025-05-04 00:42:54.350691 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-04 00:42:54.352057 | orchestrator | Sunday 04 May 2025 00:42:54 +0000 (0:00:00.223) 0:01:02.395 ************ 2025-05-04 00:42:54.499507 | orchestrator | skipping: [testbed-node-5] 2025-05-04 
00:42:54.500576 | orchestrator | 2025-05-04 00:42:54.501443 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-04 00:42:54.502366 | orchestrator | Sunday 04 May 2025 00:42:54 +0000 (0:00:00.150) 0:01:02.545 ************ 2025-05-04 00:42:54.736152 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98453abf-c748-514f-aec7-544322a7c940'}}) 2025-05-04 00:42:54.736350 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f54bf35c-9381-504c-8591-afe4d3e61469'}}) 2025-05-04 00:42:54.737359 | orchestrator | 2025-05-04 00:42:54.738473 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-04 00:42:54.738917 | orchestrator | Sunday 04 May 2025 00:42:54 +0000 (0:00:00.233) 0:01:02.779 ************ 2025-05-04 00:42:56.489367 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'}) 2025-05-04 00:42:56.491155 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'}) 2025-05-04 00:42:56.492273 | orchestrator | 2025-05-04 00:42:56.492855 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-04 00:42:56.493534 | orchestrator | Sunday 04 May 2025 00:42:56 +0000 (0:00:01.755) 0:01:04.534 ************ 2025-05-04 00:42:56.663527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:56.665645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:56.666941 | orchestrator | skipping: 
[testbed-node-5] 2025-05-04 00:42:56.667888 | orchestrator | 2025-05-04 00:42:56.668581 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-04 00:42:56.670660 | orchestrator | Sunday 04 May 2025 00:42:56 +0000 (0:00:00.174) 0:01:04.708 ************ 2025-05-04 00:42:57.939440 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'}) 2025-05-04 00:42:57.940950 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'}) 2025-05-04 00:42:57.942899 | orchestrator | 2025-05-04 00:42:58.346663 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-04 00:42:58.346841 | orchestrator | Sunday 04 May 2025 00:42:57 +0000 (0:00:01.275) 0:01:05.984 ************ 2025-05-04 00:42:58.346879 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:58.348050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:58.348592 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:58.349124 | orchestrator | 2025-05-04 00:42:58.350109 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-04 00:42:58.498863 | orchestrator | Sunday 04 May 2025 00:42:58 +0000 (0:00:00.409) 0:01:06.394 ************ 2025-05-04 00:42:58.498991 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:58.499630 | orchestrator | 2025-05-04 00:42:58.500482 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-04 00:42:58.501201 | 
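The "Create dict of block VGs -> PVs from ceph_osd_devices" task followed by "Create block VGs" / "Create block LVs" applies a naming convention that is visible directly in the changed items: VG `ceph-<osd_lvm_uuid>` on the device, LV `osd-block-<osd_lvm_uuid>` inside it. A sketch of that mapping, with the function name and input shape assumed from the logged item structure:

```python
def block_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Derive the VG/LV name pairs created for each OSD device,
    following the naming visible in the log. Illustrative only."""
    return [
        {"device": f"/dev/{dev}",
         "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
         "data": f"osd-block-{spec['osd_lvm_uuid']}"}
        for dev, spec in ceph_osd_devices.items()
    ]

# Devices and UUIDs as reported for testbed-node-5.
vols = block_volumes({
    "sdb": {"osd_lvm_uuid": "98453abf-c748-514f-aec7-544322a7c940"},
    "sdc": {"osd_lvm_uuid": "f54bf35c-9381-504c-8591-afe4d3e61469"},
})
```

Deriving both names from one stable UUID keeps the run idempotent: a re-run finds the VG/LV already present and reports ok instead of changed.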
orchestrator | Sunday 04 May 2025 00:42:58 +0000 (0:00:00.151) 0:01:06.546 ************ 2025-05-04 00:42:58.683097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:58.683958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:58.684999 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:58.685922 | orchestrator | 2025-05-04 00:42:58.686896 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-04 00:42:58.687542 | orchestrator | Sunday 04 May 2025 00:42:58 +0000 (0:00:00.182) 0:01:06.728 ************ 2025-05-04 00:42:58.839400 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:58.843226 | orchestrator | 2025-05-04 00:42:58.843669 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-04 00:42:58.846138 | orchestrator | Sunday 04 May 2025 00:42:58 +0000 (0:00:00.157) 0:01:06.885 ************ 2025-05-04 00:42:59.013318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:59.015838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:59.016883 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:59.017504 | orchestrator | 2025-05-04 00:42:59.018846 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-04 00:42:59.019268 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.173) 0:01:07.059 ************ 2025-05-04 00:42:59.169707 | orchestrator | 
skipping: [testbed-node-5] 2025-05-04 00:42:59.170349 | orchestrator | 2025-05-04 00:42:59.170391 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-04 00:42:59.171106 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.156) 0:01:07.216 ************ 2025-05-04 00:42:59.339721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:59.339968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:59.339996 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:59.341442 | orchestrator | 2025-05-04 00:42:59.341479 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-04 00:42:59.341719 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.171) 0:01:07.387 ************ 2025-05-04 00:42:59.495672 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:42:59.496812 | orchestrator | 2025-05-04 00:42:59.497398 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-04 00:42:59.498075 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.155) 0:01:07.542 ************ 2025-05-04 00:42:59.663666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:59.663961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:59.664695 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:59.665824 | orchestrator | 2025-05-04 00:42:59.666525 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-05-04 00:42:59.667269 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.166) 0:01:07.709 ************ 2025-05-04 00:42:59.851203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:42:59.851596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:42:59.852151 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:42:59.852623 | orchestrator | 2025-05-04 00:42:59.854271 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-04 00:42:59.855119 | orchestrator | Sunday 04 May 2025 00:42:59 +0000 (0:00:00.189) 0:01:07.898 ************ 2025-05-04 00:43:00.009319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})  2025-05-04 00:43:00.009849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})  2025-05-04 00:43:00.010568 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:43:00.012600 | orchestrator | 2025-05-04 00:43:00.015735 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-04 00:43:00.016440 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 (0:00:00.157) 0:01:08.056 ************ 2025-05-04 00:43:00.389764 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:43:00.390132 | orchestrator | 2025-05-04 00:43:00.391115 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-04 00:43:00.392264 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 
(0:00:00.380) 0:01:08.436 ************
2025-05-04 00:43:00.533848 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:00.534098 | orchestrator |
2025-05-04 00:43:00.535657 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-04 00:43:00.535971 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 (0:00:00.143) 0:01:08.580 ************
2025-05-04 00:43:00.677364 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:00.677829 | orchestrator |
2025-05-04 00:43:00.678943 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-04 00:43:00.679563 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 (0:00:00.142) 0:01:08.723 ************
2025-05-04 00:43:00.826385 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:43:00.827350 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-04 00:43:00.828002 | orchestrator | }
2025-05-04 00:43:00.828235 | orchestrator |
2025-05-04 00:43:00.829013 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-04 00:43:00.829526 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 (0:00:00.149) 0:01:08.873 ************
2025-05-04 00:43:00.982138 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:43:00.982717 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-04 00:43:00.984049 | orchestrator | }
2025-05-04 00:43:00.984287 | orchestrator |
2025-05-04 00:43:00.984868 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-04 00:43:00.985895 | orchestrator | Sunday 04 May 2025 00:43:00 +0000 (0:00:00.154) 0:01:09.028 ************
2025-05-04 00:43:01.133062 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:43:01.133687 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-04 00:43:01.134139 | orchestrator | }
2025-05-04 00:43:01.134600 | orchestrator |
2025-05-04 00:43:01.135299 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-04 00:43:01.136281 | orchestrator | Sunday 04 May 2025 00:43:01 +0000 (0:00:00.151) 0:01:09.179 ************
2025-05-04 00:43:01.649691 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:01.649951 | orchestrator |
2025-05-04 00:43:01.650247 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-04 00:43:01.650818 | orchestrator | Sunday 04 May 2025 00:43:01 +0000 (0:00:00.515) 0:01:09.695 ************
2025-05-04 00:43:02.168354 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:02.169939 | orchestrator |
2025-05-04 00:43:02.171666 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-04 00:43:02.172470 | orchestrator | Sunday 04 May 2025 00:43:02 +0000 (0:00:00.519) 0:01:10.214 ************
2025-05-04 00:43:02.669353 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:02.669884 | orchestrator |
2025-05-04 00:43:02.801126 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-04 00:43:02.801266 | orchestrator | Sunday 04 May 2025 00:43:02 +0000 (0:00:00.501) 0:01:10.716 ************
2025-05-04 00:43:02.801302 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:02.801383 | orchestrator |
2025-05-04 00:43:02.801508 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-04 00:43:02.802318 | orchestrator | Sunday 04 May 2025 00:43:02 +0000 (0:00:00.132) 0:01:10.848 ************
2025-05-04 00:43:02.921982 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:02.922225 | orchestrator |
2025-05-04 00:43:02.923855 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-04 00:43:02.924667 | orchestrator | Sunday 04 May 2025 00:43:02 +0000 (0:00:00.114) 0:01:10.963 ************
2025-05-04 00:43:03.240040 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:03.240420 | orchestrator |
2025-05-04 00:43:03.240968 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-04 00:43:03.241531 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.322) 0:01:11.286 ************
2025-05-04 00:43:03.389299 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:43:03.389491 | orchestrator |  "vgs_report": {
2025-05-04 00:43:03.389533 | orchestrator |  "vg": []
2025-05-04 00:43:03.392550 | orchestrator |  }
2025-05-04 00:43:03.392645 | orchestrator | }
2025-05-04 00:43:03.392665 | orchestrator |
2025-05-04 00:43:03.392681 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-04 00:43:03.392702 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.146) 0:01:11.433 ************
2025-05-04 00:43:03.528634 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:03.528856 | orchestrator |
2025-05-04 00:43:03.528906 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-04 00:43:03.528967 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.142) 0:01:11.575 ************
2025-05-04 00:43:03.663685 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:03.663887 | orchestrator |
2025-05-04 00:43:03.664665 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-04 00:43:03.665412 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.135) 0:01:11.711 ************
2025-05-04 00:43:03.802968 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:03.803589 | orchestrator |
2025-05-04 00:43:03.804066 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-04 00:43:03.804829 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.133) 0:01:11.844 ************
2025-05-04 00:43:03.944378 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:03.944593 | orchestrator |
2025-05-04 00:43:03.944737 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-04 00:43:04.104056 | orchestrator | Sunday 04 May 2025 00:43:03 +0000 (0:00:00.146) 0:01:11.991 ************
2025-05-04 00:43:04.104197 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.104825 | orchestrator |
2025-05-04 00:43:04.105306 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-04 00:43:04.107580 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.158) 0:01:12.150 ************
2025-05-04 00:43:04.255048 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.255262 | orchestrator |
2025-05-04 00:43:04.256874 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-04 00:43:04.257080 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.151) 0:01:12.301 ************
2025-05-04 00:43:04.400076 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.400308 | orchestrator |
2025-05-04 00:43:04.400668 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-04 00:43:04.403421 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.141) 0:01:12.443 ************
2025-05-04 00:43:04.548170 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.548503 | orchestrator |
2025-05-04 00:43:04.548912 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-04 00:43:04.549712 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.151) 0:01:12.594 ************
2025-05-04 00:43:04.700370 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.701304 | orchestrator |
2025-05-04 00:43:04.701605 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-04 00:43:04.704562 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.151) 0:01:12.746 ************
2025-05-04 00:43:04.876896 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:04.877493 | orchestrator |
2025-05-04 00:43:04.877539 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-04 00:43:04.878344 | orchestrator | Sunday 04 May 2025 00:43:04 +0000 (0:00:00.177) 0:01:12.923 ************
2025-05-04 00:43:05.247877 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:05.248093 | orchestrator |
2025-05-04 00:43:05.249292 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-04 00:43:05.250732 | orchestrator | Sunday 04 May 2025 00:43:05 +0000 (0:00:00.371) 0:01:13.294 ************
2025-05-04 00:43:05.437185 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:05.438825 | orchestrator |
2025-05-04 00:43:05.439430 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-04 00:43:05.439575 | orchestrator | Sunday 04 May 2025 00:43:05 +0000 (0:00:00.189) 0:01:13.484 ************
2025-05-04 00:43:05.606854 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:05.607091 | orchestrator |
2025-05-04 00:43:05.607566 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-04 00:43:05.607864 | orchestrator | Sunday 04 May 2025 00:43:05 +0000 (0:00:00.168) 0:01:13.653 ************
2025-05-04 00:43:05.765464 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:05.766893 | orchestrator |
2025-05-04 00:43:05.767534 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-04 00:43:05.768598 | orchestrator | Sunday 04 May 2025 00:43:05 +0000 (0:00:00.157) 0:01:13.811 ************
2025-05-04 00:43:05.938491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:05.938876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:05.941823 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:05.943013 | orchestrator |
2025-05-04 00:43:05.944229 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-04 00:43:05.944724 | orchestrator | Sunday 04 May 2025 00:43:05 +0000 (0:00:00.172) 0:01:13.984 ************
2025-05-04 00:43:06.118478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.118927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.120131 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.120850 | orchestrator |
2025-05-04 00:43:06.123773 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-04 00:43:06.124583 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.180) 0:01:14.165 ************
2025-05-04 00:43:06.292582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.292824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.292915 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.293418 | orchestrator |
2025-05-04 00:43:06.294122 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-04 00:43:06.294313 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.173) 0:01:14.338 ************
2025-05-04 00:43:06.463735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.463982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.464838 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.465624 | orchestrator |
2025-05-04 00:43:06.465872 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-04 00:43:06.466590 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.172) 0:01:14.511 ************
2025-05-04 00:43:06.627438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.627660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.628456 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.629832 | orchestrator |
2025-05-04 00:43:06.630285 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-04 00:43:06.630987 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.162) 0:01:14.674 ************
2025-05-04 00:43:06.824680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.825183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.825904 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.827554 | orchestrator |
2025-05-04 00:43:06.828291 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-04 00:43:06.828848 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.196) 0:01:14.870 ************
2025-05-04 00:43:06.996430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:06.997898 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:06.998828 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:06.999893 | orchestrator |
2025-05-04 00:43:07.000259 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-04 00:43:07.000834 | orchestrator | Sunday 04 May 2025 00:43:06 +0000 (0:00:00.172) 0:01:15.043 ************
2025-05-04 00:43:07.168560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:07.168864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:07.169708 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:07.170532 | orchestrator |
2025-05-04 00:43:07.171281 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-04 00:43:07.172611 | orchestrator | Sunday 04 May 2025 00:43:07 +0000 (0:00:00.171) 0:01:15.215 ************
2025-05-04 00:43:07.670534 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:07.670778 | orchestrator |
2025-05-04 00:43:07.671752 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-04 00:43:07.672688 | orchestrator | Sunday 04 May 2025 00:43:07 +0000 (0:00:00.501) 0:01:15.717 ************
2025-05-04 00:43:08.192367 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:08.192549 | orchestrator |
2025-05-04 00:43:08.193394 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-04 00:43:08.194605 | orchestrator | Sunday 04 May 2025 00:43:08 +0000 (0:00:00.520) 0:01:16.238 ************
2025-05-04 00:43:08.348604 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:08.349559 | orchestrator |
2025-05-04 00:43:08.350592 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-04 00:43:08.350987 | orchestrator | Sunday 04 May 2025 00:43:08 +0000 (0:00:00.158) 0:01:16.396 ************
2025-05-04 00:43:08.532541 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'vg_name': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:08.533832 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'vg_name': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:08.533887 | orchestrator |
2025-05-04 00:43:08.534890 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-04 00:43:08.535893 | orchestrator | Sunday 04 May 2025 00:43:08 +0000 (0:00:00.183) 0:01:16.579 ************
2025-05-04 00:43:08.715662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:08.716504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:08.717134 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:08.717692 | orchestrator |
2025-05-04 00:43:08.718575 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-04 00:43:08.719287 | orchestrator | Sunday 04 May 2025 00:43:08 +0000 (0:00:00.183) 0:01:16.763 ************
2025-05-04 00:43:08.889351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:08.889601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:08.890751 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:08.892116 | orchestrator |
2025-05-04 00:43:08.892876 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-04 00:43:08.893272 | orchestrator | Sunday 04 May 2025 00:43:08 +0000 (0:00:00.170) 0:01:16.933 ************
2025-05-04 00:43:09.093379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'})
2025-05-04 00:43:09.093631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'})
2025-05-04 00:43:09.094675 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:09.095453 | orchestrator |
2025-05-04 00:43:09.096223 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-04 00:43:09.097197 | orchestrator | Sunday 04 May 2025 00:43:09 +0000 (0:00:00.205) 0:01:17.139 ************
2025-05-04 00:43:09.775238 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 00:43:09.776626 | orchestrator |  "lvm_report": {
2025-05-04 00:43:09.776672 | orchestrator |  "lv": [
2025-05-04 00:43:09.778101 | orchestrator |  {
2025-05-04 00:43:09.779259 | orchestrator |  "lv_name": "osd-block-98453abf-c748-514f-aec7-544322a7c940",
2025-05-04 00:43:09.779851 | orchestrator |  "vg_name": "ceph-98453abf-c748-514f-aec7-544322a7c940"
2025-05-04 00:43:09.781010 | orchestrator |  },
2025-05-04 00:43:09.781809 | orchestrator |  {
2025-05-04 00:43:09.782711 | orchestrator |  "lv_name": "osd-block-f54bf35c-9381-504c-8591-afe4d3e61469",
2025-05-04 00:43:09.783466 | orchestrator |  "vg_name": "ceph-f54bf35c-9381-504c-8591-afe4d3e61469"
2025-05-04 00:43:09.785086 | orchestrator |  }
2025-05-04 00:43:09.786687 | orchestrator |  ],
2025-05-04 00:43:09.787071 | orchestrator |  "pv": [
2025-05-04 00:43:09.788526 | orchestrator |  {
2025-05-04 00:43:09.789584 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-04 00:43:09.790206 | orchestrator |  "vg_name": "ceph-98453abf-c748-514f-aec7-544322a7c940"
2025-05-04 00:43:09.790840 | orchestrator |  },
2025-05-04 00:43:09.791618 | orchestrator |  {
2025-05-04 00:43:09.792338 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-04 00:43:09.792977 | orchestrator |  "vg_name": "ceph-f54bf35c-9381-504c-8591-afe4d3e61469"
2025-05-04 00:43:09.793730 | orchestrator |  }
2025-05-04 00:43:09.794947 | orchestrator |  ]
2025-05-04 00:43:09.795928 | orchestrator |  }
2025-05-04 00:43:09.796359 | orchestrator | }
2025-05-04 00:43:09.797149 | orchestrator |
2025-05-04 00:43:09.797923 | orchestrator | 2025-05-04 00:43:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:43:09.798102 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:43:09.798414 | orchestrator | 2025-05-04 00:43:09 | INFO  | Please wait and do not abort execution.
2025-05-04 00:43:09.799933 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-04 00:43:09.801196 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-04 00:43:09.801946 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-04 00:43:09.802935 | orchestrator |
2025-05-04 00:43:09.803430 | orchestrator |
2025-05-04 00:43:09.803914 | orchestrator |
2025-05-04 00:43:09.805753 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 00:43:09.806525 | orchestrator | Sunday 04 May 2025 00:43:09 +0000 (0:00:00.680) 0:01:17.820 ************
2025-05-04 00:43:09.807114 | orchestrator | ===============================================================================
2025-05-04 00:43:09.807760 | orchestrator | Create block VGs -------------------------------------------------------- 5.94s
2025-05-04 00:43:09.808185 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s
2025-05-04 00:43:09.808509 | orchestrator | Print LVM report data --------------------------------------------------- 2.60s
2025-05-04 00:43:09.809086 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.93s
2025-05-04 00:43:09.809902 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.80s
2025-05-04 00:43:09.811013 | orchestrator | Add known links to the list of available block devices ------------------ 1.66s
2025-05-04 00:43:09.811924 | orchestrator | Add known partitions to the list of available block devices ------------- 1.64s
2025-05-04 00:43:09.812143 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2025-05-04 00:43:09.813242 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s
2025-05-04 00:43:09.814154 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s
2025-05-04 00:43:09.814743 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2025-05-04 00:43:09.815568 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-05-04 00:43:09.816065 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s
2025-05-04 00:43:09.817783 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.81s
2025-05-04 00:43:09.818402 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.78s
2025-05-04 00:43:09.818951 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-05-04 00:43:09.819675 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s
2025-05-04 00:43:09.820432 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_devices ------------------------- 0.74s
2025-05-04 00:43:09.820727 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2025-05-04 00:43:09.821095 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s
2025-05-04 00:43:11.774460 | orchestrator | 2025-05-04 00:43:11 | INFO  | Task 1dd9ac6c-eaa6-4ef9-9029-0593bb184407 (facts) was prepared for execution.
2025-05-04 00:43:11.774656 | orchestrator | 2025-05-04 00:43:11 | INFO  | It takes a moment until task 1dd9ac6c-eaa6-4ef9-9029-0593bb184407 (facts) has been started and output is visible here.
2025-05-04 00:43:14.709079 | orchestrator |
2025-05-04 00:43:14.709866 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-04 00:43:14.710998 | orchestrator |
2025-05-04 00:43:14.715768 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-04 00:43:15.558944 | orchestrator | Sunday 04 May 2025 00:43:14 +0000 (0:00:00.147) 0:00:00.147 ************
2025-05-04 00:43:15.559089 | orchestrator | ok: [testbed-manager]
2025-05-04 00:43:15.562565 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:43:15.563121 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:43:15.563150 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:43:15.563165 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:43:15.563179 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:43:15.563199 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:15.563747 | orchestrator |
2025-05-04 00:43:15.564437 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-04 00:43:15.565322 | orchestrator | Sunday 04 May 2025 00:43:15 +0000 (0:00:00.848) 0:00:00.995 ************
2025-05-04 00:43:15.685253 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:43:15.747246 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:43:15.808780 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:43:15.870237 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:43:15.973300 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:43:16.610334 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:43:16.610873 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:16.611750 | orchestrator |
2025-05-04 00:43:16.615263 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-04 00:43:16.616144 | orchestrator |
2025-05-04 00:43:16.616884 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-04 00:43:16.618083 | orchestrator | Sunday 04 May 2025 00:43:16 +0000 (0:00:01.052) 0:00:02.048 ************
2025-05-04 00:43:21.043550 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:43:21.043965 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:43:21.045422 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:43:21.047073 | orchestrator | ok: [testbed-manager]
2025-05-04 00:43:21.047413 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:43:21.049438 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:43:21.049962 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:43:21.051405 | orchestrator |
2025-05-04 00:43:21.051779 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-04 00:43:21.052853 | orchestrator |
2025-05-04 00:43:21.053548 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-04 00:43:21.054479 | orchestrator | Sunday 04 May 2025 00:43:21 +0000 (0:00:04.432) 0:00:06.481 ************
2025-05-04 00:43:21.389632 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:43:21.471663 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:43:21.552488 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:43:21.636089 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:43:21.716550 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:43:21.761944 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:43:21.762325 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:43:21.763326 | orchestrator |
2025-05-04 00:43:21.764270 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:43:21.764691 | orchestrator | 2025-05-04 00:43:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-04 00:43:21.764928 | orchestrator | 2025-05-04 00:43:21 | INFO  | Please wait and do not abort execution.
2025-05-04 00:43:21.766111 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.767006 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.767947 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.768550 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.769283 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.769999 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.770489 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-04 00:43:21.770963 | orchestrator |
2025-05-04 00:43:21.771650 | orchestrator | Sunday 04 May 2025 00:43:21 +0000 (0:00:00.719) 0:00:07.201 ************
2025-05-04 00:43:21.772212 | orchestrator | ===============================================================================
2025-05-04 00:43:21.772925 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.43s
2025-05-04 00:43:21.773489 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2025-05-04 00:43:21.773998 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.85s
2025-05-04 00:43:21.774742 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s
2025-05-04 00:43:22.420313 | orchestrator |
2025-05-04 00:43:22.422203 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun May 4 00:43:22 UTC 2025
2025-05-04 00:43:23.889724 | orchestrator |
2025-05-04 00:43:23.889886 | orchestrator | 2025-05-04 00:43:23 | INFO  | Collection nutshell is prepared for execution
2025-05-04 00:43:23.894167 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [0] - dotfiles
2025-05-04 00:43:23.894238 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [0] - homer
2025-05-04 00:43:23.894305 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [0] - netdata
2025-05-04 00:43:23.894323 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [0] - openstackclient
2025-05-04 00:43:23.894342 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [0] - phpmyadmin
2025-05-04 00:43:23.895774 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [0] - common
2025-05-04 00:43:23.895831 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [1] -- loadbalancer
2025-05-04 00:43:23.895928 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [2] --- opensearch
2025-05-04 00:43:23.895952 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [2] --- mariadb-ng
2025-05-04 00:43:23.896852 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [3] ---- horizon
2025-05-04 00:43:23.897068 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [3] ---- keystone
2025-05-04 00:43:23.897092 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [4] ----- neutron
2025-05-04 00:43:23.897107 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ wait-for-nova
2025-05-04 00:43:23.897123 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [5] ------ octavia
2025-05-04 00:43:23.897142 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- barbican
2025-05-04 00:43:23.897198 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- designate
2025-05-04 00:43:23.897216 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- ironic
2025-05-04 00:43:23.897234 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- placement
2025-05-04 00:43:23.897287 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- magnum
2025-05-04 00:43:23.897308 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [1] -- openvswitch
2025-05-04 00:43:23.897689 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [2] --- ovn
2025-05-04 00:43:23.897972 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [1] -- memcached
2025-05-04 00:43:23.897996 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [1] -- redis
2025-05-04 00:43:23.898010 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [1] -- rabbitmq-ng
2025-05-04 00:43:23.898094 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [0] - kubernetes
2025-05-04 00:43:23.898114 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [1] -- kubeconfig
2025-05-04 00:43:23.899294 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [1] -- copy-kubeconfig
2025-05-04 00:43:23.899319 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [0] - ceph
2025-05-04 00:43:23.899339 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [1] -- ceph-pools
2025-05-04 00:43:23.899771 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [2] --- copy-ceph-keys
2025-05-04 00:43:23.899793 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [3] ---- cephclient
2025-05-04 00:43:23.899833 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-04 00:43:23.899892 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [4] ----- wait-for-keystone
2025-05-04 00:43:23.899910 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-04 00:43:23.899961 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ glance
2025-05-04 00:43:23.900312 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ cinder
2025-05-04 00:43:23.900342 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ nova
2025-05-04 00:43:24.036350 | orchestrator | 2025-05-04 00:43:23 | INFO  | A [4] ----- prometheus
2025-05-04 00:43:24.036452 | orchestrator | 2025-05-04 00:43:23 | INFO  | D [5] ------ grafana
2025-05-04 00:43:24.036473 | orchestrator | 2025-05-04 00:43:24 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-04 00:43:25.932001 | orchestrator | 2025-05-04 00:43:24 | INFO  | Tasks are running in the background
2025-05-04 00:43:25.932154 | orchestrator | 2025-05-04 00:43:25 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-04 00:43:28.036451 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:28.036757 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state STARTED
2025-05-04 00:43:28.037488 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:28.037968 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:28.043719 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:28.044123 | orchestrator | 2025-05-04 00:43:28 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:31.092283 | orchestrator | 2025-05-04 00:43:28 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:31.092407 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:31.092494 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state STARTED
2025-05-04 00:43:31.096480 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:31.096959 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:31.097427 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:31.098202 | orchestrator | 2025-05-04 00:43:31 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:34.153708 | orchestrator | 2025-05-04 00:43:31 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:34.153868 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:34.157528 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state STARTED
2025-05-04 00:43:34.157622 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:34.157654 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:34.157745 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:34.157770 | orchestrator | 2025-05-04 00:43:34 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:37.223288 | orchestrator | 2025-05-04 00:43:34 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:37.223411 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:37.224958 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state STARTED
2025-05-04 00:43:37.224994 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:37.227444 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:40.277742 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:40.277895 | orchestrator | 2025-05-04 00:43:37 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:40.277917 | orchestrator | 2025-05-04 00:43:37 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:40.277949 | orchestrator | 2025-05-04 
00:43:40 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:43.331231 | orchestrator | 2025-05-04 00:43:40 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state STARTED
2025-05-04 00:43:43.331338 | orchestrator | 2025-05-04 00:43:40 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:43.331359 | orchestrator | 2025-05-04 00:43:40 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:43.331374 | orchestrator | 2025-05-04 00:43:40 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:43.331389 | orchestrator | 2025-05-04 00:43:40 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:43.331404 | orchestrator | 2025-05-04 00:43:40 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:43.331435 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:43:43.332607 | orchestrator |
2025-05-04 00:43:43.332650 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-04 00:43:43.332665 | orchestrator |
2025-05-04 00:43:43.332680 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-04 00:43:43.332694 | orchestrator | Sunday 04 May 2025 00:43:30 +0000 (0:00:00.472) 0:00:00.472 ************
2025-05-04 00:43:43.332709 | orchestrator | changed: [testbed-manager]
2025-05-04 00:43:43.332725 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:43:43.332740 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:43:43.332754 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:43:43.332768 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:43:43.332782 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:43:43.332796 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:43:43.332852 | orchestrator |
2025-05-04 00:43:43.332881 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-05-04 00:43:43.332908 | orchestrator | Sunday 04 May 2025 00:43:34 +0000 (0:00:03.736) 0:00:04.208 ************
2025-05-04 00:43:43.332923 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-04 00:43:43.332937 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-04 00:43:43.332956 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-04 00:43:43.332971 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-04 00:43:43.332984 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-04 00:43:43.332998 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-04 00:43:43.333012 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-04 00:43:43.333026 | orchestrator |
2025-05-04 00:43:43.333040 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-05-04 00:43:43.333054 | orchestrator | Sunday 04 May 2025 00:43:36 +0000 (0:00:01.840) 0:00:06.049 ************ 2025-05-04 00:43:43.333091 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:35.549665', 'end': '2025-05-04 00:43:35.557345', 'delta': '0:00:00.007680', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333115 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:35.548798', 'end': '2025-05-04 00:43:35.553770', 'delta': '0:00:00.004972', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333130 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:35.585188', 'end': '2025-05-04 00:43:35.593784', 'delta': '0:00:00.008596', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333169 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:35.708461', 'end': '2025-05-04 00:43:35.716393', 'delta': '0:00:00.007932', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333185 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:35.815166', 'end': '2025-05-04 00:43:35.825692', 'delta': '0:00:00.010526', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333207 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:36.120752', 'end': '2025-05-04 00:43:36.125803', 'delta': '0:00:00.005051', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-04 00:43:43.333228 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-04 00:43:36.280658', 'end': '2025-05-04 00:43:36.289944', 'delta': '0:00:00.009286', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-04 00:43:43.333244 | orchestrator |
2025-05-04 00:43:43.333262 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-04 00:43:43.333286 | orchestrator | Sunday 04 May 2025 00:43:39 +0000 (0:00:02.491) 0:00:08.540 ************
2025-05-04 00:43:43.333304 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-04 00:43:43.333320 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-04 00:43:43.333336 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-04 00:43:43.333351 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-04 00:43:43.333368 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-04 00:43:43.333384 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-04 00:43:43.333399 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-04 00:43:43.333416 | orchestrator |
2025-05-04 00:43:43.333432 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:43:43.333448 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333465 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333482 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333505 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333558 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333585 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333610 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:43:43.333646 | orchestrator |
2025-05-04 00:43:43.333670 | orchestrator | Sunday 04 May 2025 00:43:41 +0000 (0:00:02.351) 0:00:10.892 ************
2025-05-04 00:43:43.333696 | orchestrator | ===============================================================================
2025-05-04 00:43:43.333721 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.74s
2025-05-04 00:43:43.333735 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.49s
2025-05-04 00:43:43.333750 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.35s
2025-05-04 00:43:43.333765 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.84s
2025-05-04 00:43:43.333785 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 95ee6716-3334-49de-a6e8-d279e4c62a1e is in state SUCCESS
2025-05-04 00:43:43.333896 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:43:43.335854 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:43:43.336421 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:43:43.337238 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:43:43.339363 | orchestrator | 2025-05-04 00:43:43 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED
2025-05-04 00:43:43.339504 | orchestrator | 2025-05-04 00:43:43 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:43:46.399908 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:43:46.400138 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:43:46.401539 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:43:46.403405 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:43:46.403931 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:43:46.404846 | orchestrator | 2025-05-04 00:43:46 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:43:46.405492 | orchestrator | 2025-05-04 00:43:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:43:49.468286 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:43:49.468674 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:43:49.469763 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:43:49.470627 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:43:49.471027 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:43:49.471098 | orchestrator | 2025-05-04 00:43:49 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:43:52.547609 | orchestrator | 2025-05-04 00:43:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:43:52.547745 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:43:52.559998 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task 
5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:43:52.586409 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:43:52.586508 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:43:55.624572 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:43:55.624703 | orchestrator | 2025-05-04 00:43:52 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:43:55.624724 | orchestrator | 2025-05-04 00:43:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:43:55.624758 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:43:55.625648 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:43:55.628001 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:43:55.630901 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:43:55.634204 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:43:55.643293 | orchestrator | 2025-05-04 00:43:55 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:43:58.724089 | orchestrator | 2025-05-04 00:43:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:43:58.724251 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:01.776015 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:01.776164 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task 
561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:01.776187 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:01.776203 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:01.776219 | orchestrator | 2025-05-04 00:43:58 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:44:01.776234 | orchestrator | 2025-05-04 00:43:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:01.776292 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:01.777943 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:01.781561 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:01.784009 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:01.788469 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:01.790314 | orchestrator | 2025-05-04 00:44:01 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:44:04.844215 | orchestrator | 2025-05-04 00:44:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:04.844383 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:07.916425 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:07.916590 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:07.916611 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task 
47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:07.916626 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:07.916641 | orchestrator | 2025-05-04 00:44:04 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state STARTED 2025-05-04 00:44:07.916656 | orchestrator | 2025-05-04 00:44:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:07.916689 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:07.917819 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:07.920297 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:07.923748 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:07.928641 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:10.976318 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 37c054a1-f743-4abd-8ec3-5cdee04d7ce4 is in state SUCCESS 2025-05-04 00:44:10.976488 | orchestrator | 2025-05-04 00:44:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:44:10.976512 | orchestrator | 2025-05-04 00:44:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:10.976545 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:10.978368 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:10.981783 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:10.982701 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task 
47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:10.985642 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:10.988672 | orchestrator | 2025-05-04 00:44:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:44:14.036439 | orchestrator | 2025-05-04 00:44:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:14.036537 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:14.040892 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:14.042244 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:14.042284 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:14.042302 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:14.042314 | orchestrator | 2025-05-04 00:44:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:44:14.042332 | orchestrator | 2025-05-04 00:44:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:17.085610 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:17.086473 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:17.086544 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:17.087475 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:17.090456 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task 
3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:17.091024 | orchestrator | 2025-05-04 00:44:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:44:17.091304 | orchestrator | 2025-05-04 00:44:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:20.136824 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:20.137189 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:20.155801 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:23.203569 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:23.203690 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:23.203712 | orchestrator | 2025-05-04 00:44:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:44:23.203729 | orchestrator | 2025-05-04 00:44:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:44:23.203762 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:44:23.208485 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED 2025-05-04 00:44:23.214092 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED 2025-05-04 00:44:23.217966 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:44:23.218698 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED 2025-05-04 00:44:23.219615 | orchestrator | 2025-05-04 00:44:23 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:26.279619 | orchestrator | 2025-05-04 00:44:23 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:26.279746 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:26.281684 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:26.285290 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:44:26.289455 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:26.289932 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state STARTED
2025-05-04 00:44:26.290501 | orchestrator | 2025-05-04 00:44:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:29.327769 | orchestrator | 2025-05-04 00:44:26 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:29.327943 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:32.369580 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:32.369710 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:44:32.369732 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:32.369747 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task 3ddcde56-eb15-4af4-8cb3-a799624ee208 is in state SUCCESS
2025-05-04 00:44:32.369762 | orchestrator | 2025-05-04 00:44:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:32.369777 | orchestrator | 2025-05-04 00:44:29 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:32.369806 | orchestrator | 2025-05-04 00:44:32 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:32.369979 | orchestrator | 2025-05-04 00:44:32 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:32.377134 | orchestrator | 2025-05-04 00:44:32 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:44:32.380143 | orchestrator | 2025-05-04 00:44:32 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:32.383141 | orchestrator | 2025-05-04 00:44:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:35.428700 | orchestrator | 2025-05-04 00:44:32 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:35.428820 | orchestrator | 2025-05-04 00:44:35 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:35.429880 | orchestrator | 2025-05-04 00:44:35 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:35.431007 | orchestrator | 2025-05-04 00:44:35 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state STARTED
2025-05-04 00:44:35.432269 | orchestrator | 2025-05-04 00:44:35 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:35.433417 | orchestrator | 2025-05-04 00:44:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:38.478793 | orchestrator | 2025-05-04 00:44:35 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:38.478993 | orchestrator | 2025-05-04 00:44:38 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:38.482483 | orchestrator | 2025-05-04 00:44:38 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:38.482683 | orchestrator | 2025-05-04 00:44:38 | INFO  | Task 561fce98-e80d-4907-83f7-50f4b1527637 is in state SUCCESS
2025-05-04 00:44:38.484378 | orchestrator |
2025-05-04 00:44:38.484427 | orchestrator |
2025-05-04 00:44:38.484443 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-04 00:44:38.484458 | orchestrator |
2025-05-04 00:44:38.484473 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-04 00:44:38.484488 | orchestrator | Sunday 04 May 2025 00:43:31 +0000 (0:00:00.481) 0:00:00.481 ************
2025-05-04 00:44:38.484503 | orchestrator | ok: [testbed-manager] => {
2025-05-04 00:44:38.484518 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-04 00:44:38.484534 | orchestrator | }
2025-05-04 00:44:38.484584 | orchestrator |
2025-05-04 00:44:38.484600 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-04 00:44:38.484614 | orchestrator | Sunday 04 May 2025 00:43:31 +0000 (0:00:00.330) 0:00:00.812 ************
2025-05-04 00:44:38.484628 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.484643 | orchestrator |
2025-05-04 00:44:38.484658 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-04 00:44:38.484690 | orchestrator | Sunday 04 May 2025 00:43:32 +0000 (0:00:01.451) 0:00:02.263 ************
2025-05-04 00:44:38.484705 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-04 00:44:38.484719 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-04 00:44:38.484733 | orchestrator |
2025-05-04 00:44:38.484748 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-04 00:44:38.484771 | orchestrator | Sunday 04 May 2025 00:43:33 +0000 (0:00:00.878) 0:00:03.141 ************
2025-05-04 00:44:38.484791 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.484805 | orchestrator |
2025-05-04 00:44:38.484820 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-04 00:44:38.484834 | orchestrator | Sunday 04 May 2025 00:43:36 +0000 (0:00:02.360) 0:00:05.502 ************
2025-05-04 00:44:38.484848 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.484899 | orchestrator |
2025-05-04 00:44:38.484924 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-04 00:44:38.484948 | orchestrator | Sunday 04 May 2025 00:43:38 +0000 (0:00:01.990) 0:00:07.493 ************
2025-05-04 00:44:38.484965 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-04 00:44:38.484980 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.484997 | orchestrator |
2025-05-04 00:44:38.485013 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-04 00:44:38.485029 | orchestrator | Sunday 04 May 2025 00:44:02 +0000 (0:00:24.852) 0:00:32.345 ************
2025-05-04 00:44:38.485045 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485062 | orchestrator |
2025-05-04 00:44:38.485078 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:44:38.485095 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.485112 | orchestrator |
2025-05-04 00:44:38.485128 | orchestrator | Sunday 04 May 2025 00:44:04 +0000 (0:00:01.968) 0:00:34.314 ************
2025-05-04 00:44:38.485144 | orchestrator | ===============================================================================
2025-05-04 00:44:38.485160 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.85s
2025-05-04 00:44:38.485176 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.36s
2025-05-04 00:44:38.485191 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.99s
2025-05-04 00:44:38.485212 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.97s
2025-05-04 00:44:38.485229 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.45s
2025-05-04 00:44:38.485245 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.88s
2025-05-04 00:44:38.485261 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.33s
2025-05-04 00:44:38.485277 | orchestrator |
2025-05-04 00:44:38.485292 | orchestrator |
2025-05-04 00:44:38.485308 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-04 00:44:38.485324 | orchestrator |
2025-05-04 00:44:38.485339 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-04 00:44:38.485354 | orchestrator | Sunday 04 May 2025 00:43:32 +0000 (0:00:00.216) 0:00:00.216 ************
2025-05-04 00:44:38.485368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-04 00:44:38.485383 | orchestrator |
2025-05-04 00:44:38.485398 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-04 00:44:38.485412 | orchestrator | Sunday 04 May 2025 00:43:32 +0000 (0:00:00.242) 0:00:00.458 ************
2025-05-04 00:44:38.485426 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-04 00:44:38.485440 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-04 00:44:38.485463 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-04 00:44:38.485477 | orchestrator |
2025-05-04 00:44:38.485492 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-04 00:44:38.485506 | orchestrator | Sunday 04 May 2025 00:43:33 +0000 (0:00:01.111) 0:00:01.569 ************
2025-05-04 00:44:38.485520 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485534 | orchestrator |
2025-05-04 00:44:38.485548 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-04 00:44:38.485562 | orchestrator | Sunday 04 May 2025 00:43:34 +0000 (0:00:01.381) 0:00:02.951 ************
2025-05-04 00:44:38.485577 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-04 00:44:38.485591 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.485605 | orchestrator |
2025-05-04 00:44:38.485630 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-04 00:44:38.485650 | orchestrator | Sunday 04 May 2025 00:44:19 +0000 (0:00:44.202) 0:00:47.154 ************
2025-05-04 00:44:38.485665 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485680 | orchestrator |
2025-05-04 00:44:38.485694 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-04 00:44:38.485708 | orchestrator | Sunday 04 May 2025 00:44:20 +0000 (0:00:01.468) 0:00:48.623 ************
2025-05-04 00:44:38.485723 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.485737 | orchestrator |
2025-05-04 00:44:38.485752 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-04 00:44:38.485766 | orchestrator | Sunday 04 May 2025 00:44:21 +0000 (0:00:01.349) 0:00:49.972 ************
2025-05-04 00:44:38.485780 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485794 | orchestrator |
2025-05-04 00:44:38.485809 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-04 00:44:38.485823 | orchestrator | Sunday 04 May 2025 00:44:24 +0000 (0:00:02.666) 0:00:52.639 ************
2025-05-04 00:44:38.485837 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485852 | orchestrator |
2025-05-04 00:44:38.485893 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-04 00:44:38.485908 | orchestrator | Sunday 04 May 2025 00:44:25 +0000 (0:00:00.984) 0:00:53.623 ************
2025-05-04 00:44:38.485922 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.485936 | orchestrator |
2025-05-04 00:44:38.485951 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-04 00:44:38.485965 | orchestrator | Sunday 04 May 2025 00:44:26 +0000 (0:00:00.784) 0:00:54.408 ************
2025-05-04 00:44:38.485979 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.485993 | orchestrator |
2025-05-04 00:44:38.486007 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:44:38.486098 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.486118 | orchestrator |
2025-05-04 00:44:38.486133 | orchestrator | Sunday 04 May 2025 00:44:26 +0000 (0:00:00.513) 0:00:54.922 ************
2025-05-04 00:44:38.486147 | orchestrator | ===============================================================================
2025-05-04 00:44:38.486161 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 44.20s
2025-05-04 00:44:38.486175 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.67s
2025-05-04 00:44:38.486190 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.47s
2025-05-04 00:44:38.486209 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.38s
2025-05-04 00:44:38.486224 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.35s
2025-05-04 00:44:38.486239 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.11s
2025-05-04 00:44:38.486253 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.98s
2025-05-04 00:44:38.486277 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.78s
2025-05-04 00:44:38.486291 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.51s
2025-05-04 00:44:38.486306 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2025-05-04 00:44:38.486320 | orchestrator |
2025-05-04 00:44:38.486335 | orchestrator |
2025-05-04 00:44:38.486349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:44:38.486364 | orchestrator |
2025-05-04 00:44:38.486378 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:44:38.486392 | orchestrator | Sunday 04 May 2025 00:43:30 +0000 (0:00:00.144) 0:00:00.144 ************
2025-05-04 00:44:38.486406 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-04 00:44:38.486421 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-04 00:44:38.486435 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-04 00:44:38.486449 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-04 00:44:38.486463 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-04 00:44:38.486477 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-04 00:44:38.486491 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-04 00:44:38.486505 | orchestrator |
2025-05-04 00:44:38.486519 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-04 00:44:38.486533 | orchestrator |
2025-05-04 00:44:38.486547 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-04 00:44:38.486562 | orchestrator | Sunday 04 May 2025 00:43:32 +0000 (0:00:01.513) 0:00:01.657 ************
2025-05-04 00:44:38.486588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:44:38.486605 | orchestrator |
2025-05-04 00:44:38.486619 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-04 00:44:38.486634 | orchestrator | Sunday 04 May 2025 00:43:33 +0000 (0:00:01.613) 0:00:03.271 ************
2025-05-04 00:44:38.486647 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.486661 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:44:38.486675 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:44:38.486689 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:44:38.486703 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:44:38.486717 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:44:38.486731 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:44:38.486749 | orchestrator |
2025-05-04 00:44:38.486770 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-04 00:44:38.486794 | orchestrator | Sunday 04 May 2025 00:43:36 +0000 (0:00:03.420) 0:00:05.672 ************
2025-05-04 00:44:38.486809 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:44:38.486823 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.486837 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:44:38.486851 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:44:38.486901 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:44:38.486916 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:44:38.486930 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:44:38.486945 | orchestrator |
2025-05-04 00:44:38.486959 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-04 00:44:38.486973 | orchestrator | Sunday 04 May 2025 00:43:39 +0000 (0:00:03.420) 0:00:09.093 ************
2025-05-04 00:44:38.486988 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.487002 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:44:38.487017 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:44:38.487031 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:44:38.487051 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:44:38.487073 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:44:38.487088 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:44:38.487102 | orchestrator |
2025-05-04 00:44:38.487116 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-04 00:44:38.487131 | orchestrator | Sunday 04 May 2025 00:43:42 +0000 (0:00:02.640) 0:00:11.734 ************
2025-05-04 00:44:38.487145 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.487159 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:44:38.487174 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:44:38.487188 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:44:38.487202 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:44:38.487216 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:44:38.487231 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:44:38.487245 | orchestrator |
2025-05-04 00:44:38.487259 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-04 00:44:38.487273 | orchestrator | Sunday 04 May 2025 00:43:51 +0000 (0:00:09.341) 0:00:21.075 ************
2025-05-04 00:44:38.487287 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:44:38.487301 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:44:38.487316 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:44:38.487330 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:44:38.487343 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:44:38.487358 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:44:38.487371 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.487386 | orchestrator |
2025-05-04 00:44:38.487400 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-04 00:44:38.487414 | orchestrator | Sunday 04 May 2025 00:44:12 +0000 (0:00:20.499) 0:00:41.574 ************
2025-05-04 00:44:38.487429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:44:38.487448 | orchestrator |
2025-05-04 00:44:38.487463 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-04 00:44:38.487477 | orchestrator | Sunday 04 May 2025 00:44:14 +0000 (0:00:02.168) 0:00:43.743 ************
2025-05-04 00:44:38.487492 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-04 00:44:38.487506 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-04 00:44:38.487520 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-04 00:44:38.487534 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-04 00:44:38.487548 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-04 00:44:38.487562 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-04 00:44:38.487577 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-04 00:44:38.487591 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-04 00:44:38.487605 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-04 00:44:38.487619 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-04 00:44:38.487633 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-04 00:44:38.487647 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-04 00:44:38.487661 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-04 00:44:38.487675 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-04 00:44:38.487689 | orchestrator |
2025-05-04 00:44:38.487704 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-04 00:44:38.487718 | orchestrator | Sunday 04 May 2025 00:44:21 +0000 (0:00:06.759) 0:00:50.502 ************
2025-05-04 00:44:38.487732 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.487747 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:44:38.487761 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:44:38.487775 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:44:38.487789 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:44:38.487809 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:44:38.487823 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:44:38.487837 | orchestrator |
2025-05-04 00:44:38.487852 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-04 00:44:38.487888 | orchestrator | Sunday 04 May 2025 00:44:22 +0000 (0:00:01.608) 0:00:52.110 ************
2025-05-04 00:44:38.487904 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:44:38.487918 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:44:38.487932 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.487946 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:44:38.487960 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:44:38.487975 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:44:38.487989 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:44:38.488003 | orchestrator |
2025-05-04 00:44:38.488018 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-04 00:44:38.488036 | orchestrator | Sunday 04 May 2025 00:44:25 +0000 (0:00:02.749) 0:00:54.860 ************
2025-05-04 00:44:38.488051 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:44:38.488066 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:44:38.488080 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.488094 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:44:38.488115 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:44:38.488130 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:44:38.488144 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:44:38.488158 | orchestrator |
2025-05-04 00:44:38.488172 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-04 00:44:38.488187 | orchestrator | Sunday 04 May 2025 00:44:27 +0000 (0:00:02.304) 0:00:57.165 ************
2025-05-04 00:44:38.488201 | orchestrator | ok: [testbed-manager]
2025-05-04 00:44:38.488216 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:44:38.488230 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:44:38.488244 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:44:38.488258 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:44:38.488307 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:44:38.488324 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:44:38.488338 | orchestrator |
2025-05-04 00:44:38.488353 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-04 00:44:38.488367 | orchestrator | Sunday 04 May 2025 00:44:30 +0000 (0:00:02.476) 0:00:59.641 ************
2025-05-04 00:44:38.488382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-04 00:44:38.488398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:44:38.488413 | orchestrator |
2025-05-04 00:44:38.488427 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-04 00:44:38.488442 | orchestrator | Sunday 04 May 2025 00:44:31 +0000 (0:00:01.196) 0:01:00.837 ************
2025-05-04 00:44:38.488456 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.488471 | orchestrator |
2025-05-04 00:44:38.488485 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-04 00:44:38.488500 | orchestrator | Sunday 04 May 2025 00:44:33 +0000 (0:00:02.283) 0:01:03.121 ************
2025-05-04 00:44:38.488514 | orchestrator | changed: [testbed-manager]
2025-05-04 00:44:38.488528 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:44:38.488543 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:44:38.488557 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:44:38.488571 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:44:38.488586 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:44:38.488609 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:44:38.488624 | orchestrator |
2025-05-04 00:44:38.488639 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:44:38.488654 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488676 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488691 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488711 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488726 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488740 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488755 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:44:38.488769 | orchestrator |
2025-05-04 00:44:38.488784 | orchestrator | Sunday 04 May 2025 00:44:36 +0000 (0:00:03.087) 0:01:06.209 ************
2025-05-04 00:44:38.488798 | orchestrator | ===============================================================================
2025-05-04 00:44:38.488813 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 20.50s
2025-05-04 00:44:38.488827 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.34s
2025-05-04 00:44:38.488842 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.76s
2025-05-04 00:44:38.488876 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.42s
2025-05-04 00:44:38.488892 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.09s
2025-05-04 00:44:38.488907 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.75s
2025-05-04 00:44:38.488921 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.64s
2025-05-04 00:44:38.488935 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.48s
2025-05-04 00:44:38.488950 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.40s
2025-05-04 00:44:38.488964 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.30s
2025-05-04 00:44:38.488979 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.28s
2025-05-04 00:44:38.488993 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.17s
2025-05-04 00:44:38.489008 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.61s
2025-05-04 00:44:38.489022 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.61s
2025-05-04 00:44:38.489044 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.51s
2025-05-04 00:44:38.489136 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.20s
2025-05-04 00:44:38.489159 | orchestrator | 2025-05-04 00:44:38 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:41.526935 | orchestrator | 2025-05-04 00:44:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:41.527093 | orchestrator | 2025-05-04 00:44:38 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:41.527150 | orchestrator | 2025-05-04 00:44:41 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:41.527239 | orchestrator | 2025-05-04 00:44:41 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:41.528064 | orchestrator | 2025-05-04 00:44:41 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:41.528741 | orchestrator | 2025-05-04 00:44:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:44.565993 | orchestrator | 2025-05-04 00:44:41 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:44.566135 | orchestrator | 2025-05-04 00:44:44 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:44.566814 | orchestrator | 2025-05-04 00:44:44 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:44.568399 | orchestrator | 2025-05-04 00:44:44 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:44.569676 | orchestrator | 2025-05-04 00:44:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:47.626154 | orchestrator | 2025-05-04 00:44:44 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:47.626299 | orchestrator | 2025-05-04 00:44:47 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:47.626626 | orchestrator | 2025-05-04 00:44:47 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state STARTED
2025-05-04 00:44:47.626659 | orchestrator | 2025-05-04 00:44:47 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:47.627647 | orchestrator | 2025-05-04 00:44:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:47.627800 | orchestrator | 2025-05-04 00:44:47 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:50.675748 | orchestrator | 2025-05-04 00:44:50 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:50.677543 | orchestrator | 2025-05-04 00:44:50 | INFO  | Task 5ec0659e-0ca7-4105-8d58-748cb9e2f0c5 is in state SUCCESS
2025-05-04 00:44:50.677591 | orchestrator | 2025-05-04 00:44:50 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:50.680675 | orchestrator | 2025-05-04 00:44:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:53.745713 | orchestrator | 2025-05-04 00:44:50 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:53.745844 | orchestrator | 2025-05-04 00:44:53 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:53.747692 | orchestrator | 2025-05-04 00:44:53 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:53.747735 | orchestrator | 2025-05-04 00:44:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:56.787725 | orchestrator | 2025-05-04 00:44:53 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:56.787858 | orchestrator | 2025-05-04 00:44:56 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:56.790329 | orchestrator | 2025-05-04 00:44:56 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:56.790363 | orchestrator | 2025-05-04 00:44:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:44:56.790385 | orchestrator | 2025-05-04 00:44:56 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:44:59.835378 | orchestrator | 2025-05-04 00:44:59 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:44:59.836908 | orchestrator | 2025-05-04 00:44:59 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:44:59.836972 | orchestrator | 2025-05-04 00:44:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:02.883451 | orchestrator | 2025-05-04 00:44:59 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:02.883635 | orchestrator | 2025-05-04 00:45:02 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:02.884823 | orchestrator | 2025-05-04 00:45:02 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:02.886208 | orchestrator | 2025-05-04 00:45:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:05.938165 | orchestrator | 2025-05-04 00:45:02 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:05.938327 | orchestrator | 2025-05-04 00:45:05 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:05.939116 | orchestrator | 2025-05-04 00:45:05 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:05.940316 | orchestrator | 2025-05-04 00:45:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:05.940419 | orchestrator | 2025-05-04 00:45:05 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:08.992949 | orchestrator | 2025-05-04 00:45:08 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:08.996201 | orchestrator | 2025-05-04 00:45:08 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:08.999456 | orchestrator | 2025-05-04 00:45:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:12.050467 | orchestrator | 2025-05-04 00:45:08 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:12.050625 | orchestrator | 2025-05-04 00:45:12 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:12.053150 | orchestrator | 2025-05-04 00:45:12 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:12.054007 | orchestrator | 2025-05-04 00:45:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:12.054155 | orchestrator | 2025-05-04 00:45:12 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:15.095847 | orchestrator | 2025-05-04 00:45:15 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:15.097610 | orchestrator | 2025-05-04 00:45:15 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:15.099017 | orchestrator | 2025-05-04 00:45:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:18.141403 | orchestrator | 2025-05-04 00:45:15 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:18.141561 | orchestrator | 2025-05-04 00:45:18 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:18.142611 | orchestrator | 2025-05-04 00:45:18 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:18.142654 | orchestrator | 2025-05-04 00:45:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:18.144015 | orchestrator | 2025-05-04 00:45:18 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:21.206818 | orchestrator | 2025-05-04 00:45:21 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:21.209851 | orchestrator | 2025-05-04 00:45:21 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:24.268148 | orchestrator | 2025-05-04 00:45:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:24.268389 | orchestrator | 2025-05-04 00:45:21 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:24.268433 | orchestrator | 2025-05-04 00:45:24 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:24.268558 | orchestrator | 2025-05-04 00:45:24 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:24.269577 | orchestrator | 2025-05-04 00:45:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:27.310361 | orchestrator | 2025-05-04 00:45:24 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:27.310510 | orchestrator | 2025-05-04 00:45:27 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:27.312942 | orchestrator | 2025-05-04 00:45:27 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:27.314840 | orchestrator | 2025-05-04 00:45:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:30.367873 | orchestrator | 2025-05-04 00:45:27 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:30.368122 | orchestrator | 2025-05-04 00:45:30 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:30.370954 | orchestrator | 2025-05-04 00:45:30 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:30.372926 | orchestrator | 2025-05-04 00:45:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:33.415676 | orchestrator | 2025-05-04 00:45:30 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:33.415836 | orchestrator | 2025-05-04 00:45:33 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:33.416792 | orchestrator | 2025-05-04 00:45:33 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:33.419022 | orchestrator | 2025-05-04 00:45:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:33.419710 | orchestrator | 2025-05-04 00:45:33 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:36.461718 | orchestrator | 2025-05-04 00:45:36 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:36.463851 | orchestrator | 2025-05-04 00:45:36 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED
2025-05-04 00:45:36.465640 | orchestrator | 2025-05-04 00:45:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:45:36.466297 | orchestrator | 2025-05-04 00:45:36 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:45:39.515063 | orchestrator | 2025-05-04 00:45:39 | INFO  | Task
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:42.567850 | orchestrator | 2025-05-04 00:45:39 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:45:42.568080 | orchestrator | 2025-05-04 00:45:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:42.568132 | orchestrator | 2025-05-04 00:45:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:45:42.568167 | orchestrator | 2025-05-04 00:45:42 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:42.568264 | orchestrator | 2025-05-04 00:45:42 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:45:42.568288 | orchestrator | 2025-05-04 00:45:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:45.624511 | orchestrator | 2025-05-04 00:45:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:45:45.624697 | orchestrator | 2025-05-04 00:45:45 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:45.625541 | orchestrator | 2025-05-04 00:45:45 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:45:45.626452 | orchestrator | 2025-05-04 00:45:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:45.626935 | orchestrator | 2025-05-04 00:45:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:45:48.676010 | orchestrator | 2025-05-04 00:45:48 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:48.677024 | orchestrator | 2025-05-04 00:45:48 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state STARTED 2025-05-04 00:45:48.678979 | orchestrator | 2025-05-04 00:45:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:51.761155 | orchestrator | 2025-05-04 00:45:48 | INFO  | Wait 1 second(s) until the next 
check
2025-05-04 00:45:51.761371 | orchestrator | 2025-05-04 00:45:51 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:45:51.765378 | orchestrator | 2025-05-04 00:45:51 | INFO  | Task 47350cf7-b4ea-4814-af02-e1b8cee63370 is in state SUCCESS
2025-05-04 00:45:51.768056 | orchestrator |
2025-05-04 00:45:51.768122 | orchestrator |
2025-05-04 00:45:51.768149 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-04 00:45:51.768173 | orchestrator |
2025-05-04 00:45:51.768200 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-04 00:45:51.768309 | orchestrator | Sunday 04 May 2025 00:43:46 +0000 (0:00:00.184) 0:00:00.184 ************
2025-05-04 00:45:51.768336 | orchestrator | ok: [testbed-manager]
2025-05-04 00:45:51.768355 | orchestrator |
2025-05-04 00:45:51.768370 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-04 00:45:51.768442 | orchestrator | Sunday 04 May 2025 00:43:46 +0000 (0:00:00.855) 0:00:01.039 ************
2025-05-04 00:45:51.768459 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-04 00:45:51.768485 | orchestrator |
2025-05-04 00:45:51.768499 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-04 00:45:51.768514 | orchestrator | Sunday 04 May 2025 00:43:47 +0000 (0:00:00.816) 0:00:01.856 ************
2025-05-04 00:45:51.768528 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.768543 | orchestrator |
2025-05-04 00:45:51.768559 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-04 00:45:51.768576 | orchestrator | Sunday 04 May 2025 00:43:49 +0000 (0:00:01.373) 0:00:03.230 ************
2025-05-04 00:45:51.768592 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
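The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above are the OSISM CLI polling the manager for Celery task states until every task reaches a finished state. A minimal sketch of such a polling loop, with hypothetical function and state names (this is not the actual `osism` client code):

```python
import time

# States treated as terminal; assumed to match Celery's task-state names.
FINISHED_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task's state, logging as in the job console, until all finish.

    get_state is a hypothetical callable mapping a task ID to its state string.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state in FINISHED_STATES:
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

A bounded timeout keeps a stuck task from blocking the job forever; the log above shows the happy path, where one of the three tasks flips to SUCCESS after roughly a minute of polling.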
2025-05-04 00:45:51.768608 | orchestrator | ok: [testbed-manager]
2025-05-04 00:45:51.768625 | orchestrator |
2025-05-04 00:45:51.768642 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-04 00:45:51.768658 | orchestrator | Sunday 04 May 2025 00:44:46 +0000 (0:00:56.952) 0:01:00.183 ************
2025-05-04 00:45:51.768674 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.768698 | orchestrator |
2025-05-04 00:45:51.768722 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:45:51.768744 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:45:51.768762 | orchestrator |
2025-05-04 00:45:51.768778 | orchestrator | Sunday 04 May 2025 00:44:49 +0000 (0:00:03.769) 0:01:03.952 ************
2025-05-04 00:45:51.768798 | orchestrator | ===============================================================================
2025-05-04 00:45:51.768824 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.95s
2025-05-04 00:45:51.768850 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.77s
2025-05-04 00:45:51.768875 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.37s
2025-05-04 00:45:51.768958 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.86s
2025-05-04 00:45:51.768984 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.82s
2025-05-04 00:45:51.769010 | orchestrator |
2025-05-04 00:45:51.769032 | orchestrator |
2025-05-04 00:45:51.769052 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-04 00:45:51.769066 | orchestrator |
2025-05-04 00:45:51.769080 | orchestrator | TASK [common : include_tasks]
**************************************************
2025-05-04 00:45:51.769095 | orchestrator | Sunday 04 May 2025 00:43:27 +0000 (0:00:00.266) 0:00:00.266 ************
2025-05-04 00:45:51.769110 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:45:51.769126 | orchestrator |
2025-05-04 00:45:51.769140 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-04 00:45:51.769154 | orchestrator | Sunday 04 May 2025 00:43:28 +0000 (0:00:01.369) 0:00:01.635 ************
2025-05-04 00:45:51.769169 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769183 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769197 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769211 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769226 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769242 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769256 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769270 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769284 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769299 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769313 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769327 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-04 00:45:51.769349 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769364 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769379 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769398 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769412 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-04 00:45:51.769443 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769469 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769494 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769517 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-04 00:45:51.769533 | orchestrator |
2025-05-04 00:45:51.769557 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-04 00:45:51.769582 | orchestrator | Sunday 04 May 2025 00:43:32 +0000 (0:00:04.013) 0:00:05.649 ************
2025-05-04 00:45:51.769607 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:45:51.769654 | orchestrator |
2025-05-04 00:45:51.769681 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-04 00:45:51.769704 | orchestrator | Sunday 04 May 2025 00:43:34 +0000
(0:00:01.661) 0:00:07.311 ************ 2025-05-04 00:45:51.769730 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.769859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-05-04 00:45:51.769883 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.769898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.769945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-04 00:45:51.769961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.769982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 
00:45:51.770229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770402 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.770547 | orchestrator 
| 2025-05-04 00:45:51.770562 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-04 00:45:51.770576 | orchestrator | Sunday 04 May 2025 00:43:39 +0000 (0:00:05.003) 0:00:12.314 ************ 2025-05-04 00:45:51.770619 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-04 00:45:51.770637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770686 | orchestrator | 
skipping: [testbed-manager] 2025-05-04 00:45:51.770712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-04 00:45:51.770735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770778 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:45:51.770800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-04 00:45:51.770835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.770875 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:45:51.770890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.770932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.770959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.770984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.771002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771041 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.771055 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.771078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771123 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.771142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771218 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.771244 | orchestrator |
2025-05-04 00:45:51.771264 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-04 00:45:51.771288 | orchestrator | Sunday 04 May 2025 00:43:41 +0000 (0:00:01.785) 0:00:14.099 ************
2025-05-04 00:45:51.771313 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771410 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771426 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:45:51.771441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771559 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:45:51.771573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771619 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:45:51.771633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771693 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.771707 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.771721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771774 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.771789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.771804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.771819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.771842 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.771856 | orchestrator |
2025-05-04 00:45:51.771871 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-04 00:45:51.771885 | orchestrator | Sunday 04 May 2025 00:43:44 +0000 (0:00:03.189) 0:00:17.289 ************
2025-05-04 00:45:51.771900 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:45:51.771996 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:45:51.772011 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:45:51.772025 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.772039 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.772054 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.772068 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.772082 | orchestrator |
2025-05-04 00:45:51.772097 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-04 00:45:51.772111 | orchestrator | Sunday 04 May 2025 00:43:45 +0000 (0:00:01.003) 0:00:18.292 ************
2025-05-04 00:45:51.772126 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:45:51.772139 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:45:51.772153 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:45:51.772167 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.772182 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.772196 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.772210 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.772224 | orchestrator |
2025-05-04 00:45:51.772238 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-05-04 00:45:51.772253 | orchestrator | Sunday 04 May 2025 00:43:46 +0000 (0:00:00.952) 0:00:19.245 ************
2025-05-04 00:45:51.772267 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:45:51.772281 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:45:51.772295 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:45:51.772309 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:45:51.772323 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:45:51.772337 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:45:51.772350 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.772364 | orchestrator |
2025-05-04 00:45:51.772377 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-05-04 00:45:51.772391 | orchestrator | Sunday 04 May 2025 00:44:24 +0000 (0:00:38.526) 0:00:57.771 ************
2025-05-04 00:45:51.772403 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:45:51.772425 | orchestrator | ok: [testbed-manager]
2025-05-04 00:45:51.772438 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:45:51.772451 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:45:51.772464 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:45:51.772476 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:45:51.772489 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:45:51.772502 | orchestrator |
2025-05-04 00:45:51.772516 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-04 00:45:51.772529 | orchestrator | Sunday 04 May 2025 00:44:27 +0000 (0:00:03.015) 0:01:00.787 ************
2025-05-04 00:45:51.772542 | orchestrator | ok: [testbed-manager]
2025-05-04 00:45:51.772555 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:45:51.772568 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:45:51.772589 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:45:51.772602 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:45:51.772614 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:45:51.772627 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:45:51.772639 | orchestrator |
2025-05-04 00:45:51.772652 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-04 00:45:51.772665 | orchestrator | Sunday 04 May 2025 00:44:28 +0000 (0:00:01.265) 0:01:02.052 ************
2025-05-04 00:45:51.772677 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:45:51.772698 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:45:51.772710 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:45:51.772723 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.772735 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.772748 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.772760 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.772773 | orchestrator |
2025-05-04 00:45:51.772785 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-04 00:45:51.772798 | orchestrator | Sunday 04 May 2025 00:44:29 +0000 (0:00:00.948) 0:01:03.001 ************
2025-05-04 00:45:51.772811 | orchestrator | skipping: [testbed-manager]
2025-05-04 00:45:51.772823 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:45:51.772836 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:45:51.772849 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:45:51.772861 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:45:51.772874 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:45:51.772886 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:45:51.772899 | orchestrator |
2025-05-04 00:45:51.772934 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-04 00:45:51.772947 | orchestrator | Sunday 04 May 2025 00:44:30 +0000 (0:00:00.632) 0:01:03.633 ************
2025-05-04 00:45:51.772961 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.772975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.772995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.773009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.773030 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.773066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.773115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773128 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {…}})
2025-05-04 00:45:51.773175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {…}})
2025-05-04 00:45:51.773247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {…}})
2025-05-04 00:45:51.773320 | orchestrator |
2025-05-04 00:45:51.773333 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-04 00:45:51.773353 | orchestrator | Sunday 04 May 2025 00:44:35 +0000 (0:00:04.669) 0:01:08.302 ************
2025-05-04 00:45:51.773375 | orchestrator | [WARNING]: Skipped
2025-05-04 00:45:51.773397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-04 00:45:51.773420 | orchestrator | to this access issue:
2025-05-04 00:45:51.773441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-04 00:45:51.773464 | orchestrator | directory
2025-05-04 00:45:51.773480 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-04 00:45:51.773492 | orchestrator |
2025-05-04 00:45:51.773505 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-04 00:45:51.773518 | orchestrator | Sunday 04 May 2025 00:44:36 +0000 (0:00:00.944) 0:01:09.247 ************
2025-05-04 00:45:51.773531 | orchestrator | [WARNING]: Skipped
2025-05-04 00:45:51.773544 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-04 00:45:51.773556 | orchestrator | to this access issue:
2025-05-04 00:45:51.773569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-04 00:45:51.773582 | orchestrator | directory
2025-05-04 00:45:51.773594 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-04 00:45:51.773606 | orchestrator |
2025-05-04 00:45:51.773619 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-04 00:45:51.773632 | orchestrator | Sunday 04 May 2025 00:44:36 +0000 (0:00:00.482) 0:01:09.729 ************
2025-05-04 00:45:51.773644 | orchestrator | [WARNING]: Skipped
2025-05-04 00:45:51.773657 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-04 00:45:51.773670 | orchestrator | to this access issue:
2025-05-04 00:45:51.773683 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-04 00:45:51.773696 | orchestrator | directory
2025-05-04 00:45:51.773708 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-04 00:45:51.773721 | orchestrator |
2025-05-04 00:45:51.773733 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-04 00:45:51.773746 | orchestrator | Sunday 04 May 2025 00:44:37 +0000 (0:00:00.489) 0:01:10.219 ************
2025-05-04 00:45:51.773758 | orchestrator | [WARNING]: Skipped
2025-05-04 00:45:51.773771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-04 00:45:51.773783 |
orchestrator | to this access issue: 2025-05-04 00:45:51.773796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-04 00:45:51.773809 | orchestrator | directory 2025-05-04 00:45:51.773822 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 00:45:51.773835 | orchestrator | 2025-05-04 00:45:51.773848 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-04 00:45:51.773860 | orchestrator | Sunday 04 May 2025 00:44:37 +0000 (0:00:00.587) 0:01:10.806 ************ 2025-05-04 00:45:51.773873 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:45:51.773894 | orchestrator | changed: [testbed-manager] 2025-05-04 00:45:51.773930 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:45:51.773953 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:45:51.773974 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:45:51.773988 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:45:51.774001 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:45:51.774013 | orchestrator | 2025-05-04 00:45:51.774068 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-04 00:45:51.774082 | orchestrator | Sunday 04 May 2025 00:44:41 +0000 (0:00:03.977) 0:01:14.784 ************ 2025-05-04 00:45:51.774094 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774108 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774121 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774133 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774147 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774159 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774172 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-04 00:45:51.774185 | orchestrator | 2025-05-04 00:45:51.774198 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-04 00:45:51.774210 | orchestrator | Sunday 04 May 2025 00:44:44 +0000 (0:00:02.816) 0:01:17.600 ************ 2025-05-04 00:45:51.774223 | orchestrator | changed: [testbed-manager] 2025-05-04 00:45:51.774236 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:45:51.774248 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:45:51.774261 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:45:51.774274 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:45:51.774296 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:45:51.774309 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:45:51.774322 | orchestrator | 2025-05-04 00:45:51.774335 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-04 00:45:51.774348 | orchestrator | Sunday 04 May 2025 00:44:47 +0000 (0:00:02.987) 0:01:20.588 ************ 2025-05-04 00:45:51.774362 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 
00:45:51.774386 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774402 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774438 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774471 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774506 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774544 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774580 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774602 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774674 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774696 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774719 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-04 00:45:51.774757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:45:51.774786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:45:51.774833 | orchestrator | 2025-05-04 00:45:51.774854 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-04 00:45:51.774876 | orchestrator | Sunday 04 May 2025 00:44:50 +0000 (0:00:03.009) 0:01:23.598 ************ 2025-05-04 00:45:51.774898 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-04 00:45:51.774955 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-04 00:45:51.774977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2025-05-04 00:45:51.774999 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-04 00:45:51.775021 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-04 00:45:51.775043 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-04 00:45:51.775066 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-04 00:45:51.775090 | orchestrator |
2025-05-04 00:45:51.775113 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-04 00:45:51.775149 | orchestrator | Sunday 04 May 2025 00:44:53 +0000 (0:00:02.888) 0:01:26.486 ************
2025-05-04 00:45:51.775171 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775215 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775230 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775244 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775256 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775278 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-04 00:45:51.775290 | orchestrator |
2025-05-04 00:45:51.775303 | orchestrator | TASK [common : Check common containers] ****************************************
2025-05-04 00:45:51.775315 | orchestrator | Sunday 04 May 2025 00:44:55 +0000 (0:00:02.477) 0:01:28.963 ************
2025-05-04 00:45:51.775335 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775376 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775453 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-04 00:45:51.775511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 00:45:51.775660 | orchestrator |
2025-05-04 00:45:51.775679 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-04 00:45:51.775692 | orchestrator | Sunday 04 May 2025 00:44:59 +0000 (0:00:03.285) 0:01:32.249 ************
2025-05-04 00:45:51.775705 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.775723 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:45:51.775737 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:45:51.775749 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:45:51.775762 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:45:51.775774 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:45:51.775786 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:45:51.775799 | orchestrator |
2025-05-04 00:45:51.775811 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-04 00:45:51.775839 | orchestrator | Sunday 04 May 2025 00:45:01 +0000 (0:00:01.968) 0:01:34.218 ************
2025-05-04 00:45:51.775934 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.775951 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:45:51.775964 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:45:51.775976 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:45:51.775994 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:45:51.776007 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:45:51.776019 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:45:51.776032 | orchestrator |
2025-05-04 00:45:51.776045 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776058 | orchestrator | Sunday 04 May 2025 00:45:02 +0000 (0:00:01.586) 0:01:35.805 ************
2025-05-04 00:45:51.776070 | orchestrator |
2025-05-04 00:45:51.776083 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776096 | orchestrator | Sunday 04 May 2025 00:45:02 +0000 (0:00:00.058) 0:01:35.863 ************
2025-05-04 00:45:51.776108 | orchestrator |
2025-05-04 00:45:51.776121 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776134 | orchestrator | Sunday 04 May 2025 00:45:02 +0000 (0:00:00.053) 0:01:35.917 ************
2025-05-04 00:45:51.776146 | orchestrator |
2025-05-04 00:45:51.776159 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776172 | orchestrator | Sunday 04 May 2025 00:45:02 +0000 (0:00:00.054) 0:01:35.971 ************
2025-05-04 00:45:51.776184 | orchestrator |
2025-05-04 00:45:51.776197 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776209 | orchestrator | Sunday 04 May 2025 00:45:03 +0000 (0:00:00.245) 0:01:36.217 ************
2025-05-04 00:45:51.776222 | orchestrator |
2025-05-04 00:45:51.776234 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776246 | orchestrator | Sunday 04 May 2025 00:45:03 +0000 (0:00:00.054) 0:01:36.271 ************
2025-05-04 00:45:51.776259 | orchestrator |
2025-05-04 00:45:51.776271 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-04 00:45:51.776284 | orchestrator | Sunday 04 May 2025 00:45:03 +0000 (0:00:00.049) 0:01:36.321 ************
2025-05-04 00:45:51.776297 | orchestrator |
2025-05-04 00:45:51.776309 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-04 00:45:51.776322 | orchestrator | Sunday 04 May 2025 00:45:03 +0000 (0:00:00.067) 0:01:36.388 ************
2025-05-04 00:45:51.776334 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:45:51.776346 | orchestrator | changed: [testbed-manager]
2025-05-04 00:45:51.776360 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:45:51.776372 | orchestrator |
changed: [testbed-node-2] 2025-05-04 00:45:51.776385 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:45:51.776397 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:45:51.776410 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:45:51.776422 | orchestrator | 2025-05-04 00:45:51.776435 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-04 00:45:51.776447 | orchestrator | Sunday 04 May 2025 00:45:12 +0000 (0:00:09.009) 0:01:45.398 ************ 2025-05-04 00:45:51.776468 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:45:51.776481 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:45:51.776494 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:45:51.776506 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:45:51.776519 | orchestrator | changed: [testbed-manager] 2025-05-04 00:45:51.776531 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:45:51.776543 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:45:51.776556 | orchestrator | 2025-05-04 00:45:51.776569 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-04 00:45:51.776581 | orchestrator | Sunday 04 May 2025 00:45:38 +0000 (0:00:26.459) 0:02:11.857 ************ 2025-05-04 00:45:51.776594 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:45:51.776607 | orchestrator | ok: [testbed-manager] 2025-05-04 00:45:51.776620 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:45:51.776632 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:45:51.776645 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:45:51.776657 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:45:51.776670 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:45:51.776682 | orchestrator | 2025-05-04 00:45:51.776695 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-04 00:45:51.776708 | orchestrator | Sunday 04 May 2025 00:45:41 
+0000 (0:00:02.504) 0:02:14.362 ************ 2025-05-04 00:45:51.776721 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:45:51.776734 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:45:51.776747 | orchestrator | changed: [testbed-manager] 2025-05-04 00:45:51.776759 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:45:51.776771 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:45:51.776784 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:45:51.776796 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:45:51.776809 | orchestrator | 2025-05-04 00:45:51.776822 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:45:51.776835 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:51.776849 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:51.776862 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:51.776883 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:54.838596 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:54.838741 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:54.838762 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 00:45:54.838778 | orchestrator | 2025-05-04 00:45:54.838793 | orchestrator | 2025-05-04 00:45:54.838808 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:45:54.838825 | orchestrator | Sunday 04 May 2025 00:45:51 +0000 (0:00:09.784) 0:02:24.146 ************ 2025-05-04 
00:45:54.838843 | orchestrator | =============================================================================== 2025-05-04 00:45:54.838864 | orchestrator | common : Ensure fluentd image is present for label check --------------- 38.53s 2025-05-04 00:45:54.838879 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 26.46s 2025-05-04 00:45:54.838958 | orchestrator | common : Restart cron container ----------------------------------------- 9.78s 2025-05-04 00:45:54.838977 | orchestrator | common : Restart fluentd container -------------------------------------- 9.01s 2025-05-04 00:45:54.839033 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.00s 2025-05-04 00:45:54.839056 | orchestrator | common : Copying over config.json files for services -------------------- 4.67s 2025-05-04 00:45:54.839073 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.01s 2025-05-04 00:45:54.839089 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.98s 2025-05-04 00:45:54.839105 | orchestrator | common : Check common containers ---------------------------------------- 3.29s 2025-05-04 00:45:54.839121 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.19s 2025-05-04 00:45:54.839138 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.02s 2025-05-04 00:45:54.839154 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.01s 2025-05-04 00:45:54.839170 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.99s 2025-05-04 00:45:54.839187 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.89s 2025-05-04 00:45:54.839204 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.82s 2025-05-04 
00:45:54.839220 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.50s 2025-05-04 00:45:54.839237 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.48s 2025-05-04 00:45:54.839254 | orchestrator | common : Creating log volume -------------------------------------------- 1.97s 2025-05-04 00:45:54.839270 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.79s 2025-05-04 00:45:54.839288 | orchestrator | common : include_tasks -------------------------------------------------- 1.66s 2025-05-04 00:45:54.839304 | orchestrator | 2025-05-04 00:45:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:54.839321 | orchestrator | 2025-05-04 00:45:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:45:54.839354 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:54.840019 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:45:54.840351 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:45:54.841005 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:45:54.842679 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state STARTED 2025-05-04 00:45:54.844232 | orchestrator | 2025-05-04 00:45:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:54.844398 | orchestrator | 2025-05-04 00:45:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:45:57.892030 | orchestrator | 2025-05-04 00:45:57 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:45:57.893031 | orchestrator | 2025-05-04 00:45:57 | INFO  | 
Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:45:57.893564 | orchestrator | 2025-05-04 00:45:57 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:45:57.894117 | orchestrator | 2025-05-04 00:45:57 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:45:57.895898 | orchestrator | 2025-05-04 00:45:57 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state STARTED 2025-05-04 00:45:57.896328 | orchestrator | 2025-05-04 00:45:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:45:57.896488 | orchestrator | 2025-05-04 00:45:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:00.943360 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:00.944387 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:00.947012 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:00.947052 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:00.948405 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state STARTED 2025-05-04 00:46:00.951056 | orchestrator | 2025-05-04 00:46:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:03.991353 | orchestrator | 2025-05-04 00:46:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:03.991469 | orchestrator | 2025-05-04 00:46:03 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:03.995543 | orchestrator | 2025-05-04 00:46:03 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:03.996780 | orchestrator | 2025-05-04 00:46:03 | INFO  | Task 
437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:03.998433 | orchestrator | 2025-05-04 00:46:03 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:03.999722 | orchestrator | 2025-05-04 00:46:03 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state STARTED 2025-05-04 00:46:04.001900 | orchestrator | 2025-05-04 00:46:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:07.059837 | orchestrator | 2025-05-04 00:46:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:07.060007 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:07.060853 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:07.062007 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:07.062945 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:07.063665 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state STARTED 2025-05-04 00:46:07.065486 | orchestrator | 2025-05-04 00:46:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:07.066319 | orchestrator | 2025-05-04 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:10.106078 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:10.106192 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:10.107687 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:10.107902 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 
3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:10.107929 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 20d2db69-8def-4b43-98e6-d0823cb453ee is in state SUCCESS 2025-05-04 00:46:10.109865 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:10.110391 | orchestrator | 2025-05-04 00:46:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:10.110814 | orchestrator | 2025-05-04 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:13.155721 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:13.157594 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:13.159864 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:13.162324 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:13.163895 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:13.170993 | orchestrator | 2025-05-04 00:46:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:16.198745 | orchestrator | 2025-05-04 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:16.198944 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:16.200678 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:16.201869 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:16.203494 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task 
3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:16.204995 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:16.206407 | orchestrator | 2025-05-04 00:46:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:19.247644 | orchestrator | 2025-05-04 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:19.247770 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:19.248127 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:19.249002 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:19.249852 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:19.250849 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:19.251582 | orchestrator | 2025-05-04 00:46:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:19.251725 | orchestrator | 2025-05-04 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:22.294681 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:22.295090 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:22.296117 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:22.297103 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state STARTED 2025-05-04 00:46:22.297769 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task 
12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:22.298623 | orchestrator | 2025-05-04 00:46:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:22.298849 | orchestrator | 2025-05-04 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:25.337454 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:25.337733 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:25.339444 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:25.341383 | orchestrator | 2025-05-04 00:46:25.342302 | orchestrator | 2025-05-04 00:46:25.342318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 00:46:25.342324 | orchestrator | 2025-05-04 00:46:25.342330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 00:46:25.342336 | orchestrator | Sunday 04 May 2025 00:45:56 +0000 (0:00:00.398) 0:00:00.398 ************ 2025-05-04 00:46:25.342341 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:46:25.342348 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:46:25.342353 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:46:25.342358 | orchestrator | 2025-05-04 00:46:25.342364 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 00:46:25.342369 | orchestrator | Sunday 04 May 2025 00:45:56 +0000 (0:00:00.613) 0:00:01.011 ************ 2025-05-04 00:46:25.342375 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-04 00:46:25.342381 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-04 00:46:25.342386 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-04 
00:46:25.342391 | orchestrator | 2025-05-04 00:46:25.342396 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-04 00:46:25.342402 | orchestrator | 2025-05-04 00:46:25.342407 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-04 00:46:25.342412 | orchestrator | Sunday 04 May 2025 00:45:57 +0000 (0:00:00.426) 0:00:01.438 ************ 2025-05-04 00:46:25.342418 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:46:25.342424 | orchestrator | 2025-05-04 00:46:25.342429 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-04 00:46:25.342434 | orchestrator | Sunday 04 May 2025 00:45:58 +0000 (0:00:01.046) 0:00:02.484 ************ 2025-05-04 00:46:25.342439 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-04 00:46:25.342445 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-04 00:46:25.342450 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-04 00:46:25.342456 | orchestrator | 2025-05-04 00:46:25.342461 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-04 00:46:25.342466 | orchestrator | Sunday 04 May 2025 00:45:59 +0000 (0:00:00.850) 0:00:03.335 ************ 2025-05-04 00:46:25.342471 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-04 00:46:25.342477 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-04 00:46:25.342482 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-04 00:46:25.342487 | orchestrator | 2025-05-04 00:46:25.342493 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-04 00:46:25.342498 | orchestrator | Sunday 04 May 2025 00:46:01 +0000 (0:00:02.154) 0:00:05.489 ************ 2025-05-04 
00:46:25.342503 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:46:25.342520 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:46:25.342525 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:46:25.342530 | orchestrator | 2025-05-04 00:46:25.342537 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-04 00:46:25.342543 | orchestrator | Sunday 04 May 2025 00:46:03 +0000 (0:00:02.402) 0:00:07.892 ************ 2025-05-04 00:46:25.342548 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:46:25.342553 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:46:25.342570 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:46:25.342576 | orchestrator | 2025-05-04 00:46:25.342581 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:46:25.342587 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:46:25.342593 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:46:25.342598 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:46:25.342604 | orchestrator | 2025-05-04 00:46:25.342609 | orchestrator | 2025-05-04 00:46:25.342614 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:46:25.342620 | orchestrator | Sunday 04 May 2025 00:46:08 +0000 (0:00:04.410) 0:00:12.303 ************ 2025-05-04 00:46:25.342625 | orchestrator | =============================================================================== 2025-05-04 00:46:25.342630 | orchestrator | memcached : Restart memcached container --------------------------------- 4.41s 2025-05-04 00:46:25.342635 | orchestrator | memcached : Check memcached container ----------------------------------- 2.40s 2025-05-04 00:46:25.342641 | 
orchestrator | memcached : Copying over config.json files for services ----------------- 2.15s 2025-05-04 00:46:25.342646 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.05s 2025-05-04 00:46:25.342651 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.85s 2025-05-04 00:46:25.342656 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s 2025-05-04 00:46:25.342662 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-05-04 00:46:25.342667 | orchestrator | 2025-05-04 00:46:25.342672 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task 3be4eb23-a8f3-4fac-bb91-911db77b081a is in state SUCCESS 2025-05-04 00:46:25.342681 | orchestrator | 2025-05-04 00:46:25.342687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 00:46:25.342692 | orchestrator | 2025-05-04 00:46:25.342698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 00:46:25.342703 | orchestrator | Sunday 04 May 2025 00:45:57 +0000 (0:00:00.347) 0:00:00.347 ************ 2025-05-04 00:46:25.342708 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:46:25.342714 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:46:25.342719 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:46:25.342724 | orchestrator | 2025-05-04 00:46:25.342730 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 00:46:25.342735 | orchestrator | Sunday 04 May 2025 00:45:57 +0000 (0:00:00.541) 0:00:00.888 ************ 2025-05-04 00:46:25.342740 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-04 00:46:25.342746 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-04 00:46:25.342751 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-04 
00:46:25.342757 | orchestrator | 2025-05-04 00:46:25.342762 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-04 00:46:25.342767 | orchestrator | 2025-05-04 00:46:25.342773 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-04 00:46:25.342778 | orchestrator | Sunday 04 May 2025 00:45:58 +0000 (0:00:00.433) 0:00:01.322 ************ 2025-05-04 00:46:25.342783 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:46:25.342789 | orchestrator | 2025-05-04 00:46:25.342794 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-04 00:46:25.342800 | orchestrator | Sunday 04 May 2025 00:45:59 +0000 (0:00:01.029) 0:00:02.351 ************ 2025-05-04 00:46:25.342806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342865 | orchestrator | 2025-05-04 00:46:25.342871 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-04 00:46:25.342876 | orchestrator | Sunday 04 May 2025 00:46:00 +0000 (0:00:01.758) 0:00:04.110 ************ 2025-05-04 00:46:25.342884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342942 | orchestrator | 2025-05-04 00:46:25.342948 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-04 00:46:25.342957 | orchestrator | Sunday 04 May 2025 00:46:03 +0000 (0:00:02.759) 0:00:06.869 ************ 2025-05-04 00:46:25.342963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.342995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343003 | orchestrator | 2025-05-04 00:46:25.343009 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-04 00:46:25.343014 | orchestrator | Sunday 04 May 2025 00:46:07 +0000 (0:00:03.309) 0:00:10.179 ************ 2025-05-04 00:46:25.343020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-04 00:46:25.343089 | orchestrator | 2025-05-04 00:46:25.343096 
| orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-04 00:46:25.343101 | orchestrator | Sunday 04 May 2025 00:46:09 +0000 (0:00:02.176) 0:00:12.355 ************
2025-05-04 00:46:25.343107 | orchestrator |
2025-05-04 00:46:25.343112 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-04 00:46:25.343121 | orchestrator | Sunday 04 May 2025 00:46:09 +0000 (0:00:00.076) 0:00:12.432 ************
2025-05-04 00:46:25.343127 | orchestrator |
2025-05-04 00:46:25.343132 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-04 00:46:25.343138 | orchestrator | Sunday 04 May 2025 00:46:09 +0000 (0:00:00.075) 0:00:12.507 ************
2025-05-04 00:46:25.343143 | orchestrator |
2025-05-04 00:46:25.343149 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-04 00:46:25.343154 | orchestrator | Sunday 04 May 2025 00:46:09 +0000 (0:00:00.186) 0:00:12.693 ************
2025-05-04 00:46:25.343160 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:46:25.343165 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:46:25.343171 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:46:25.343176 | orchestrator |
2025-05-04 00:46:25.343182 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-04 00:46:25.343187 | orchestrator | Sunday 04 May 2025 00:46:14 +0000 (0:00:04.886) 0:00:17.579 ************
2025-05-04 00:46:25.343193 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:46:25.343199 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:46:25.343204 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:46:25.343210 | orchestrator |
2025-05-04 00:46:25.343215 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:46:25.343221 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:46:25.343227 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:46:25.343233 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 00:46:25.343238 | orchestrator |
2025-05-04 00:46:25.343244 | orchestrator |
2025-05-04 00:46:25.343249 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 00:46:25.343255 | orchestrator | Sunday 04 May 2025 00:46:23 +0000 (0:00:09.049) 0:00:26.629 ************
2025-05-04 00:46:25.343261 | orchestrator | ===============================================================================
2025-05-04 00:46:25.343266 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.05s
2025-05-04 00:46:25.343272 | orchestrator | redis : Restart redis container ----------------------------------------- 4.89s
2025-05-04 00:46:25.343278 | orchestrator | redis : Copying over redis config files --------------------------------- 3.31s
2025-05-04 00:46:25.343283 | orchestrator | redis : Copying over default config.json files -------------------------- 2.76s
2025-05-04 00:46:25.343289 | orchestrator | redis : Check redis containers ------------------------------------------ 2.18s
2025-05-04 00:46:25.343294 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.76s
2025-05-04 00:46:25.343300 | orchestrator | redis : include_tasks --------------------------------------------------- 1.03s
2025-05-04 00:46:25.343305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s
2025-05-04 00:46:25.343310 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-05-04 00:46:25.343315 | orchestrator | redis : Flush handlers
-------------------------------------------------- 0.34s 2025-05-04 00:46:25.343321 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:25.343843 | orchestrator | 2025-05-04 00:46:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:25.344019 | orchestrator | 2025-05-04 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:28.434735 | orchestrator | 2025-05-04 00:46:28 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:28.435320 | orchestrator | 2025-05-04 00:46:28 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:28.436454 | orchestrator | 2025-05-04 00:46:28 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:28.437562 | orchestrator | 2025-05-04 00:46:28 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:28.438423 | orchestrator | 2025-05-04 00:46:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:31.481246 | orchestrator | 2025-05-04 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:31.481372 | orchestrator | 2025-05-04 00:46:31 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:31.483570 | orchestrator | 2025-05-04 00:46:31 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:31.485416 | orchestrator | 2025-05-04 00:46:31 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:31.487016 | orchestrator | 2025-05-04 00:46:31 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:31.490528 | orchestrator | 2025-05-04 00:46:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:31.491180 | orchestrator | 2025-05-04 00:46:31 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 00:46:34.534708 | orchestrator | 2025-05-04 00:46:34 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:34.537581 | orchestrator | 2025-05-04 00:46:34 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:34.538322 | orchestrator | 2025-05-04 00:46:34 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:34.540650 | orchestrator | 2025-05-04 00:46:34 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:34.541810 | orchestrator | 2025-05-04 00:46:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:37.579539 | orchestrator | 2025-05-04 00:46:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:37.579685 | orchestrator | 2025-05-04 00:46:37 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:37.580116 | orchestrator | 2025-05-04 00:46:37 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:37.580149 | orchestrator | 2025-05-04 00:46:37 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:37.580800 | orchestrator | 2025-05-04 00:46:37 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:37.581503 | orchestrator | 2025-05-04 00:46:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:40.611450 | orchestrator | 2025-05-04 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:40.612311 | orchestrator | 2025-05-04 00:46:40 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:40.612415 | orchestrator | 2025-05-04 00:46:40 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:40.612468 | orchestrator | 2025-05-04 00:46:40 | INFO  | Task 
437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:40.613865 | orchestrator | 2025-05-04 00:46:40 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:40.614468 | orchestrator | 2025-05-04 00:46:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:43.651384 | orchestrator | 2025-05-04 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:43.651507 | orchestrator | 2025-05-04 00:46:43 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:43.651748 | orchestrator | 2025-05-04 00:46:43 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:43.652395 | orchestrator | 2025-05-04 00:46:43 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:43.658860 | orchestrator | 2025-05-04 00:46:43 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:43.659509 | orchestrator | 2025-05-04 00:46:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:46.701561 | orchestrator | 2025-05-04 00:46:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:46.701710 | orchestrator | 2025-05-04 00:46:46 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:46.702909 | orchestrator | 2025-05-04 00:46:46 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:46.702987 | orchestrator | 2025-05-04 00:46:46 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:46.705300 | orchestrator | 2025-05-04 00:46:46 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:46.705969 | orchestrator | 2025-05-04 00:46:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:49.743374 | orchestrator | 2025-05-04 00:46:46 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 00:46:49.743546 | orchestrator | 2025-05-04 00:46:49 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:49.746271 | orchestrator | 2025-05-04 00:46:49 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:49.750074 | orchestrator | 2025-05-04 00:46:49 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:49.750759 | orchestrator | 2025-05-04 00:46:49 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:49.751614 | orchestrator | 2025-05-04 00:46:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:52.799026 | orchestrator | 2025-05-04 00:46:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:52.799194 | orchestrator | 2025-05-04 00:46:52 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:52.799371 | orchestrator | 2025-05-04 00:46:52 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:52.800776 | orchestrator | 2025-05-04 00:46:52 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:52.802450 | orchestrator | 2025-05-04 00:46:52 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:52.803314 | orchestrator | 2025-05-04 00:46:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:55.853534 | orchestrator | 2025-05-04 00:46:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:55.855336 | orchestrator | 2025-05-04 00:46:55 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:55.855556 | orchestrator | 2025-05-04 00:46:55 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:55.855674 | orchestrator | 2025-05-04 00:46:55 | INFO  | Task 
437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:55.855712 | orchestrator | 2025-05-04 00:46:55 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:58.910298 | orchestrator | 2025-05-04 00:46:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:46:58.910445 | orchestrator | 2025-05-04 00:46:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:46:58.910484 | orchestrator | 2025-05-04 00:46:58 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:46:58.911801 | orchestrator | 2025-05-04 00:46:58 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:46:58.911842 | orchestrator | 2025-05-04 00:46:58 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:46:58.912866 | orchestrator | 2025-05-04 00:46:58 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:46:58.914956 | orchestrator | 2025-05-04 00:46:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:01.955703 | orchestrator | 2025-05-04 00:46:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:47:01.955825 | orchestrator | 2025-05-04 00:47:01 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:01.955913 | orchestrator | 2025-05-04 00:47:01 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:47:01.957273 | orchestrator | 2025-05-04 00:47:01 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:47:01.957858 | orchestrator | 2025-05-04 00:47:01 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:47:01.959082 | orchestrator | 2025-05-04 00:47:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:05.002104 | orchestrator | 2025-05-04 00:47:01 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 00:47:05.002204 | orchestrator | 2025-05-04 00:47:05 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:05.003802 | orchestrator | 2025-05-04 00:47:05 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:47:05.005390 | orchestrator | 2025-05-04 00:47:05 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:47:05.007161 | orchestrator | 2025-05-04 00:47:05 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:47:05.008494 | orchestrator | 2025-05-04 00:47:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:05.008607 | orchestrator | 2025-05-04 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:47:08.051483 | orchestrator | 2025-05-04 00:47:08 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:08.052999 | orchestrator | 2025-05-04 00:47:08 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:47:08.053578 | orchestrator | 2025-05-04 00:47:08 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state STARTED 2025-05-04 00:47:08.054965 | orchestrator | 2025-05-04 00:47:08 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:47:08.056797 | orchestrator | 2025-05-04 00:47:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:11.108335 | orchestrator | 2025-05-04 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:47:11.108491 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:11.108586 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:47:11.109515 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task 
72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:47:11.110562 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task 437574a9-fe3a-420e-917c-34befd2c3e43 is in state SUCCESS
2025-05-04 00:47:11.113187 | orchestrator |
2025-05-04 00:47:11.113260 | orchestrator |
2025-05-04 00:47:11.113286 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:47:11.113308 | orchestrator |
2025-05-04 00:47:11.113330 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 00:47:11.113354 | orchestrator | Sunday 04 May 2025 00:45:55 +0000 (0:00:00.608) 0:00:00.608 ************
2025-05-04 00:47:11.113377 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:47:11.113403 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:47:11.113441 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:47:11.113457 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:47:11.113471 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:47:11.113485 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:47:11.113500 | orchestrator |
2025-05-04 00:47:11.113514 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:47:11.113529 | orchestrator | Sunday 04 May 2025 00:45:57 +0000 (0:00:01.303) 0:00:01.911 ************
2025-05-04 00:47:11.113543 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113559 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113573 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113587 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113602 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113620 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-04 00:47:11.113635 | orchestrator |
2025-05-04 00:47:11.113649 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-04 00:47:11.113663 | orchestrator |
2025-05-04 00:47:11.113677 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-04 00:47:11.113691 | orchestrator | Sunday 04 May 2025 00:45:58 +0000 (0:00:01.159) 0:00:03.071 ************
2025-05-04 00:47:11.113706 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:47:11.113722 | orchestrator |
2025-05-04 00:47:11.113736 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-04 00:47:11.113751 | orchestrator | Sunday 04 May 2025 00:46:00 +0000 (0:00:01.920) 0:00:04.991 ************
2025-05-04 00:47:11.113768 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-04 00:47:11.113784 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-04 00:47:11.113800 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-04 00:47:11.113818 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-04 00:47:11.113844 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-04 00:47:11.113859 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-04 00:47:11.113874 | orchestrator |
2025-05-04 00:47:11.113888 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-04 00:47:11.113926 | orchestrator | Sunday 04 May 2025 00:46:01 +0000 (0:00:01.439) 0:00:06.430 ************
2025-05-04 00:47:11.113985 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-04 00:47:11.114007 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-04 00:47:11.114110 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-04 00:47:11.114128 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-04 00:47:11.114143 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-04 00:47:11.114157 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-04 00:47:11.114171 | orchestrator |
2025-05-04 00:47:11.114185 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-04 00:47:11.114199 | orchestrator | Sunday 04 May 2025 00:46:03 +0000 (0:00:02.281) 0:00:08.712 ************
2025-05-04 00:47:11.114214 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-04 00:47:11.114228 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:47:11.114243 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-04 00:47:11.114258 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:47:11.114272 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-04 00:47:11.114287 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:47:11.114301 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-04 00:47:11.114315 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:47:11.114330 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-04 00:47:11.114344 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:47:11.114358 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-04 00:47:11.114373 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:47:11.114387 | orchestrator |
2025-05-04 00:47:11.114401 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-05-04 00:47:11.114416 | orchestrator | Sunday 04 May 2025 00:46:06 +0000 (0:00:02.476) 0:00:11.189 ************
2025-05-04 00:47:11.114430 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:47:11.114444 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:47:11.114458 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:47:11.114472 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:47:11.114486 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:47:11.114500 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:47:11.114514 | orchestrator | 2025-05-04 00:47:11.114528 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-04 00:47:11.114543 | orchestrator | Sunday 04 May 2025 00:46:07 +0000 (0:00:00.861) 0:00:12.051 ************ 2025-05-04 00:47:11.114577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114793 | orchestrator | 2025-05-04 00:47:11.114807 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-04 
00:47:11.114822 | orchestrator | Sunday 04 May 2025 00:46:09 +0000 (0:00:02.427) 0:00:14.479 ************ 2025-05-04 00:47:11.114837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.114985 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115126 | orchestrator | 2025-05-04 00:47:11.115141 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-04 00:47:11.115155 | orchestrator | Sunday 04 May 2025 00:46:13 +0000 (0:00:03.889) 0:00:18.368 ************ 2025-05-04 00:47:11.115170 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:47:11.115184 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:47:11.115198 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:47:11.115212 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:47:11.115226 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:47:11.115240 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:47:11.115254 | orchestrator | 2025-05-04 00:47:11.115269 | 
orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-04 00:47:11.115283 | orchestrator | Sunday 04 May 2025 00:46:16 +0000 (0:00:02.855) 0:00:21.223 ************ 2025-05-04 00:47:11.115297 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:47:11.115312 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:47:11.115326 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:47:11.115340 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:47:11.115355 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:47:11.115369 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:47:11.115383 | orchestrator | 2025-05-04 00:47:11.115398 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-04 00:47:11.115412 | orchestrator | Sunday 04 May 2025 00:46:18 +0000 (0:00:02.256) 0:00:23.480 ************ 2025-05-04 00:47:11.115426 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:47:11.115440 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:47:11.115454 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:47:11.115469 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:47:11.115483 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:47:11.115497 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:47:11.115511 | orchestrator | 2025-05-04 00:47:11.115525 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-04 00:47:11.115540 | orchestrator | Sunday 04 May 2025 00:46:19 +0000 (0:00:01.279) 0:00:24.759 ************ 2025-05-04 00:47:11.115554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115621 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 
00:47:11.115773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-04 00:47:11.115787 | orchestrator | 2025-05-04 00:47:11.115802 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-04 00:47:11.115816 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:02.815) 0:00:27.574 ************ 2025-05-04 00:47:11.115830 | orchestrator | 2025-05-04 00:47:11.115860 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-04 00:47:11.115886 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:00.086) 0:00:27.661 ************ 2025-05-04 00:47:11.115901 | orchestrator | 2025-05-04 00:47:11.115915 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-04 00:47:11.115930 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:00.193) 0:00:27.854 ************ 2025-05-04 00:47:11.116001 | orchestrator | 2025-05-04 00:47:11.116017 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-04 00:47:11.116031 | orchestrator | Sunday 04 May 2025 00:46:23 +0000 (0:00:00.100) 0:00:27.955 ************ 2025-05-04 00:47:11.116046 | orchestrator | 2025-05-04 00:47:11.116065 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2025-05-04 00:47:11.116080 | orchestrator | Sunday 04 May 2025 00:46:23 +0000 (0:00:00.494) 0:00:28.449 ************ 2025-05-04 00:47:11.116095 | orchestrator | 2025-05-04 00:47:11.116109 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-04 00:47:11.116123 | orchestrator | Sunday 04 May 2025 00:46:23 +0000 (0:00:00.224) 0:00:28.674 ************ 2025-05-04 00:47:11.116137 | orchestrator | 2025-05-04 00:47:11.116151 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-04 00:47:11.116165 | orchestrator | Sunday 04 May 2025 00:46:24 +0000 (0:00:00.372) 0:00:29.046 ************ 2025-05-04 00:47:11.116179 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:47:11.116193 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:47:11.116208 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:47:11.116222 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:47:11.116236 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:47:11.116250 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:47:11.116264 | orchestrator | 2025-05-04 00:47:11.116278 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-04 00:47:11.116292 | orchestrator | Sunday 04 May 2025 00:46:34 +0000 (0:00:10.450) 0:00:39.497 ************ 2025-05-04 00:47:11.116313 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:47:11.116328 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:47:11.116343 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:47:11.116357 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:47:11.116371 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:47:11.116385 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:47:11.116399 | orchestrator | 2025-05-04 00:47:11.116413 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] 
********* 2025-05-04 00:47:11.116428 | orchestrator | Sunday 04 May 2025 00:46:36 +0000 (0:00:02.237) 0:00:41.734 ************ 2025-05-04 00:47:11.116442 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:47:11.116456 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:47:11.116470 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:47:11.116485 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:47:11.116499 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:47:11.116514 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:47:11.116536 | orchestrator | 2025-05-04 00:47:11.116551 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-04 00:47:11.116566 | orchestrator | Sunday 04 May 2025 00:46:45 +0000 (0:00:08.670) 0:00:50.404 ************ 2025-05-04 00:47:11.116580 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-04 00:47:11.116594 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-04 00:47:11.116609 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-04 00:47:11.116624 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-04 00:47:11.116638 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-04 00:47:11.116653 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-04 00:47:11.116667 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-04 00:47:11.116682 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 
'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-04 00:47:11.116703 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-04 00:47:11.116719 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-04 00:47:11.116733 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-04 00:47:11.116747 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-04 00:47:11.116767 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116782 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116796 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116810 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116825 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116839 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-04 00:47:11.116853 | orchestrator | 2025-05-04 00:47:11.116868 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-04 00:47:11.116883 | orchestrator | Sunday 04 May 2025 00:46:53 +0000 (0:00:08.035) 0:00:58.439 ************ 2025-05-04 00:47:11.116897 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-04 00:47:11.116912 | orchestrator | 
skipping: [testbed-node-3] 2025-05-04 00:47:11.116928 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-04 00:47:11.116962 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:47:11.116978 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-04 00:47:11.116992 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:47:11.117006 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-04 00:47:11.117021 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-04 00:47:11.117035 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-04 00:47:11.117049 | orchestrator | 2025-05-04 00:47:11.117064 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-04 00:47:11.117078 | orchestrator | Sunday 04 May 2025 00:46:56 +0000 (0:00:02.909) 0:01:01.349 ************ 2025-05-04 00:47:11.117092 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-04 00:47:11.117106 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:47:11.117121 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-04 00:47:11.117135 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:47:11.117149 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-04 00:47:11.117163 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:47:11.117177 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-04 00:47:11.117198 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-04 00:47:14.150149 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-04 00:47:14.150288 | orchestrator | 2025-05-04 00:47:14.150310 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-04 00:47:14.150326 | orchestrator | Sunday 04 May 2025 00:47:00 +0000 (0:00:04.481) 0:01:05.830 ************ 
2025-05-04 00:47:14.150341 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:47:14.150357 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:47:14.150372 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:47:14.150386 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:47:14.150400 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:47:14.150449 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:47:14.150464 | orchestrator | 2025-05-04 00:47:14.150479 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:47:14.150494 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-04 00:47:14.150511 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-04 00:47:14.150525 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-04 00:47:14.150540 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-04 00:47:14.150554 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-04 00:47:14.150587 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-04 00:47:14.150604 | orchestrator | 2025-05-04 00:47:14.150620 | orchestrator | 2025-05-04 00:47:14.150635 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:47:14.150651 | orchestrator | Sunday 04 May 2025 00:47:08 +0000 (0:00:07.994) 0:01:13.825 ************ 2025-05-04 00:47:14.150667 | orchestrator | =============================================================================== 2025-05-04 00:47:14.150684 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.66s 2025-05-04 00:47:14.150700 | 
orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.45s 2025-05-04 00:47:14.150715 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.04s 2025-05-04 00:47:14.150731 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.48s 2025-05-04 00:47:14.150747 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.89s 2025-05-04 00:47:14.150763 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.91s 2025-05-04 00:47:14.150778 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.86s 2025-05-04 00:47:14.150794 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.82s 2025-05-04 00:47:14.150810 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.48s 2025-05-04 00:47:14.150835 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.43s 2025-05-04 00:47:14.150855 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.28s 2025-05-04 00:47:14.150870 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.26s 2025-05-04 00:47:14.150885 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.24s 2025-05-04 00:47:14.150899 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.92s 2025-05-04 00:47:14.150913 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.47s 2025-05-04 00:47:14.150927 | orchestrator | module-load : Load modules ---------------------------------------------- 1.44s 2025-05-04 00:47:14.150976 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.30s 2025-05-04 00:47:14.151003 | 
orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.28s 2025-05-04 00:47:14.151027 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s 2025-05-04 00:47:14.151051 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.86s 2025-05-04 00:47:14.151065 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:47:14.151090 | orchestrator | 2025-05-04 00:47:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:14.151105 | orchestrator | 2025-05-04 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:47:14.151141 | orchestrator | 2025-05-04 00:47:14 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:14.151285 | orchestrator | 2025-05-04 00:47:14 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:47:14.151314 | orchestrator | 2025-05-04 00:47:14 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:47:14.151329 | orchestrator | 2025-05-04 00:47:14 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:47:14.151350 | orchestrator | 2025-05-04 00:47:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:47:17.186921 | orchestrator | 2025-05-04 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:47:17.187124 | orchestrator | 2025-05-04 00:47:17 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:47:17.189148 | orchestrator | 2025-05-04 00:47:17 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:47:17.191211 | orchestrator | 2025-05-04 00:47:17 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:47:17.192578 | orchestrator | 2025-05-04 00:47:17 | INFO  
| Task
12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:48:18.128193 | orchestrator | 2025-05-04 00:48:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:48:18.128290 | orchestrator | 2025-05-04 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:48:21.167721 | orchestrator | 2025-05-04 00:48:21 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:48:21.171696 | orchestrator | 2025-05-04 00:48:21 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:48:21.172468 | orchestrator | 2025-05-04 00:48:21 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:48:21.172966 | orchestrator | 2025-05-04 00:48:21 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state STARTED 2025-05-04 00:48:21.173630 | orchestrator | 2025-05-04 00:48:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:48:24.224053 | orchestrator | 2025-05-04 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:48:24.224217 | orchestrator | 2025-05-04 00:48:24 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:48:24.224571 | orchestrator | 2025-05-04 00:48:24 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:48:24.227507 | orchestrator | 2025-05-04 00:48:24 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:48:24.230568 | orchestrator | 2025-05-04 00:48:24 | INFO  | Task 12f9d9b7-bb85-43d8-ac4c-8aac001320e0 is in state SUCCESS 2025-05-04 00:48:24.232849 | orchestrator | 2025-05-04 00:48:24.232926 | orchestrator | 2025-05-04 00:48:24.233002 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-04 00:48:24.233031 | orchestrator | 2025-05-04 00:48:24.233056 | orchestrator | TASK [Inform the user about the following task] 
******************************** 2025-05-04 00:48:24.233072 | orchestrator | Sunday 04 May 2025 00:46:13 +0000 (0:00:00.208) 0:00:00.208 ************ 2025-05-04 00:48:24.233087 | orchestrator | ok: [localhost] => { 2025-05-04 00:48:24.233104 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-04 00:48:24.233119 | orchestrator | } 2025-05-04 00:48:24.233133 | orchestrator | 2025-05-04 00:48:24.233148 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-04 00:48:24.233162 | orchestrator | Sunday 04 May 2025 00:46:13 +0000 (0:00:00.104) 0:00:00.313 ************ 2025-05-04 00:48:24.233178 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-04 00:48:24.233193 | orchestrator | ...ignoring 2025-05-04 00:48:24.233209 | orchestrator | 2025-05-04 00:48:24.233234 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-04 00:48:24.233259 | orchestrator | Sunday 04 May 2025 00:46:16 +0000 (0:00:03.201) 0:00:03.515 ************ 2025-05-04 00:48:24.233283 | orchestrator | skipping: [localhost] 2025-05-04 00:48:24.233307 | orchestrator | 2025-05-04 00:48:24.233331 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-04 00:48:24.233354 | orchestrator | Sunday 04 May 2025 00:46:17 +0000 (0:00:00.097) 0:00:03.612 ************ 2025-05-04 00:48:24.233377 | orchestrator | ok: [localhost] 2025-05-04 00:48:24.233402 | orchestrator | 2025-05-04 00:48:24.233426 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 00:48:24.233449 | orchestrator | 2025-05-04 00:48:24.233475 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-05-04 00:48:24.233499 | orchestrator | Sunday 04 May 2025 00:46:17 +0000 (0:00:00.331) 0:00:03.943 ************
2025-05-04 00:48:24.233524 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:48:24.233550 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:48:24.233576 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:48:24.233602 | orchestrator |
2025-05-04 00:48:24.233628 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:48:24.233679 | orchestrator | Sunday 04 May 2025 00:46:17 +0000 (0:00:00.511) 0:00:04.454 ************
2025-05-04 00:48:24.233702 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-04 00:48:24.233725 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-04 00:48:24.233753 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-04 00:48:24.233776 | orchestrator |
2025-05-04 00:48:24.233802 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-04 00:48:24.233828 | orchestrator |
2025-05-04 00:48:24.233854 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-04 00:48:24.233879 | orchestrator | Sunday 04 May 2025 00:46:18 +0000 (0:00:00.363) 0:00:04.818 ************
2025-05-04 00:48:24.233906 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:48:24.233932 | orchestrator |
2025-05-04 00:48:24.233989 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-04 00:48:24.234015 | orchestrator | Sunday 04 May 2025 00:46:19 +0000 (0:00:01.023) 0:00:05.842 ************
2025-05-04 00:48:24.234121 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:48:24.234178 | orchestrator |
2025-05-04 00:48:24.234207 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-04 00:48:24.234234 | orchestrator | Sunday 04 May 2025 00:46:20 +0000 (0:00:01.202) 0:00:07.044 ************
2025-05-04 00:48:24.234262 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.234290 | orchestrator |
2025-05-04 00:48:24.234317 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-04 00:48:24.234366 | orchestrator | Sunday 04 May 2025 00:46:21 +0000 (0:00:00.691) 0:00:07.736 ************
2025-05-04 00:48:24.234394 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.234421 | orchestrator |
2025-05-04 00:48:24.234449 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-04 00:48:24.234476 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:01.132) 0:00:08.869 ************
2025-05-04 00:48:24.234503 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.234530 | orchestrator |
2025-05-04 00:48:24.234558 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-04 00:48:24.234586 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:00.413) 0:00:09.283 ************
2025-05-04 00:48:24.234611 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.234639 | orchestrator |
2025-05-04 00:48:24.234666 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-04 00:48:24.234693 | orchestrator | Sunday 04 May 2025 00:46:23 +0000 (0:00:00.355) 0:00:09.638 ************
2025-05-04 00:48:24.234728 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:48:24.234756 | orchestrator |
2025-05-04 00:48:24.234783 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-04 00:48:24.234809 | orchestrator | Sunday 04 May 2025 00:46:24 +0000 (0:00:01.162) 0:00:10.801 ************
2025-05-04 00:48:24.234837 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:48:24.234864 | orchestrator |
2025-05-04 00:48:24.234891 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-04 00:48:24.234918 | orchestrator | Sunday 04 May 2025 00:46:25 +0000 (0:00:01.035) 0:00:11.836 ************
2025-05-04 00:48:24.234965 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.234993 | orchestrator |
2025-05-04 00:48:24.235018 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-04 00:48:24.235044 | orchestrator | Sunday 04 May 2025 00:46:25 +0000 (0:00:00.391) 0:00:12.228 ************
2025-05-04 00:48:24.235069 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.235092 | orchestrator |
2025-05-04 00:48:24.235131 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-04 00:48:24.235158 | orchestrator | Sunday 04 May 2025 00:46:26 +0000 (0:00:00.336) 0:00:12.564 ************
2025-05-04 00:48:24.235228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235327 | orchestrator |
2025-05-04 00:48:24.235351 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-04 00:48:24.235374 | orchestrator | Sunday 04 May 2025 00:46:27 +0000 (0:00:00.978) 0:00:13.542 ************
2025-05-04 00:48:24.235405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.235475 | orchestrator |
2025-05-04 00:48:24.235490 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-04 00:48:24.235505 | orchestrator | Sunday 04 May 2025 00:46:28 +0000 (0:00:01.468) 0:00:15.011 ************
2025-05-04 00:48:24.235519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-04 00:48:24.235534 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-04 00:48:24.235549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-04 00:48:24.235563 | orchestrator |
2025-05-04 00:48:24.235578 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-04 00:48:24.235593 | orchestrator | Sunday 04 May 2025 00:46:30 +0000 (0:00:01.555) 0:00:16.567 ************
2025-05-04 00:48:24.235607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-04 00:48:24.235621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-04 00:48:24.235636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-04 00:48:24.235650 | orchestrator |
2025-05-04 00:48:24.235664 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-04 00:48:24.235679 | orchestrator | Sunday 04 May 2025 00:46:31 +0000 (0:00:01.761) 0:00:18.328 ************
2025-05-04 00:48:24.235693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-04 00:48:24.235707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-04
00:48:24.235721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-04 00:48:24.235735 | orchestrator |
2025-05-04 00:48:24.235765 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-04 00:48:24.235792 | orchestrator | Sunday 04 May 2025 00:46:33 +0000 (0:00:01.470) 0:00:19.798 ************
2025-05-04 00:48:24.235848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-04 00:48:24.235865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-04 00:48:24.235880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-04 00:48:24.235912 | orchestrator |
2025-05-04 00:48:24.235938 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-04 00:48:24.236012 | orchestrator | Sunday 04 May 2025 00:46:36 +0000 (0:00:02.749) 0:00:22.548 ************
2025-05-04 00:48:24.236028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-04 00:48:24.236043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-04 00:48:24.236058 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-04 00:48:24.236072 | orchestrator |
2025-05-04 00:48:24.236087 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-04 00:48:24.236108 | orchestrator | Sunday 04 May 2025 00:46:38 +0000 (0:00:02.324) 0:00:24.872 ************
2025-05-04 00:48:24.236124 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-04 00:48:24.236139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-04 00:48:24.236153 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-04 00:48:24.236169 | orchestrator |
2025-05-04 00:48:24.236194 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-04 00:48:24.236228 | orchestrator | Sunday 04 May 2025 00:46:40 +0000 (0:00:01.991) 0:00:26.863 ************
2025-05-04 00:48:24.236255 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:48:24.236281 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.236308 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:48:24.236334 | orchestrator |
2025-05-04 00:48:24.236361 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-04 00:48:24.236388 | orchestrator | Sunday 04 May 2025 00:46:40 +0000 (0:00:00.587) 0:00:27.451 ************
2025-05-04 00:48:24.236416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.236563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.236627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:48:24.236656 | orchestrator |
2025-05-04 00:48:24.236685 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-04 00:48:24.236739 | orchestrator | Sunday 04 May 2025 00:46:42 +0000 (0:00:01.239) 0:00:28.691 ************
2025-05-04 00:48:24.236767 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:48:24.236793 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:48:24.236819 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:48:24.236845 | orchestrator |
2025-05-04 00:48:24.236871 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-04 00:48:24.236896 | orchestrator | Sunday 04 May 2025 00:46:43 +0000 (0:00:00.911) 0:00:29.602 ************
2025-05-04 00:48:24.236921 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:48:24.237019 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:48:24.237053 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:48:24.237080 | orchestrator |
2025-05-04 00:48:24.237106 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-04 00:48:24.237131 | orchestrator | Sunday 04 May 2025 00:46:48 +0000 (0:00:05.413) 0:00:35.016 ************
2025-05-04 00:48:24.237157 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:48:24.237183 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:48:24.237208 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:48:24.237234 | orchestrator |
2025-05-04 00:48:24.237259 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-04 00:48:24.237285 | orchestrator |
2025-05-04 00:48:24.237311 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-04 00:48:24.237337 | orchestrator | Sunday 04 May 2025 00:46:48 +0000 (0:00:00.448) 0:00:35.465 ************
2025-05-04 00:48:24.237362 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:48:24.237389 | orchestrator |
2025-05-04 00:48:24.237414 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-04 00:48:24.237437 | orchestrator | Sunday 04 May 2025 00:46:49 +0000 (0:00:00.779) 0:00:36.244 ************
2025-05-04 00:48:24.237459 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:48:24.237483 | orchestrator |
2025-05-04 00:48:24.237506 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-04 00:48:24.237529 | orchestrator | Sunday 04 May 2025 00:46:49 +0000 (0:00:00.236) 0:00:36.481 ************
2025-05-04 00:48:24.237551 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:48:24.237575 | orchestrator |
2025-05-04 00:48:24.237597 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-04 00:48:24.237620 | orchestrator | Sunday 04 May 2025 00:46:51 +0000 (0:00:01.720) 0:00:38.201 ************
2025-05-04 00:48:24.237643 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:48:24.237666 | orchestrator |
2025-05-04 00:48:24.237688 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-04 00:48:24.237710 | orchestrator |
2025-05-04 00:48:24.237746 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-04 00:48:24.237769 | orchestrator | Sunday 04 May 2025 00:47:44 +0000 (0:00:53.272) 0:01:31.474 ************
2025-05-04 00:48:24.237792 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:48:24.237815 | orchestrator |
2025-05-04 00:48:24.237838 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-04 00:48:24.237860 | orchestrator | Sunday 04 May 2025 00:47:45 +0000 (0:00:00.594) 0:01:32.069 ************
2025-05-04 00:48:24.237882 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:48:24.237905 | orchestrator |
2025-05-04 00:48:24.237928 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-04 00:48:24.237975 | orchestrator | Sunday 04 May 2025 00:47:45 +0000 (0:00:00.259) 0:01:32.328 ************
2025-05-04 00:48:24.237998 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:48:24.238061 | orchestrator |
2025-05-04 00:48:24.238088 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-04 00:48:24.238108 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:01.642) 0:01:33.970 ************
2025-05-04 00:48:24.238129 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:48:24.238150 | orchestrator |
2025-05-04 00:48:24.238173 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-04 00:48:24.238195 | orchestrator |
2025-05-04 00:48:24.238217 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-04 00:48:24.238253 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:14.715) 0:01:48.686 ************
2025-05-04 00:48:24.238267 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:48:24.238280 | orchestrator |
2025-05-04 00:48:24.238301 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-04 00:48:24.238314 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.577) 0:01:49.264 ************
2025-05-04 00:48:24.238327 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:48:24.238340 | orchestrator |
2025-05-04 00:48:24.238353 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-04 00:48:24.238378 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.200) 0:01:49.464 ************
2025-05-04 00:48:24.238391 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:48:24.238404 | orchestrator |
2025-05-04 00:48:24.238417 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-04 00:48:24.238430 | orchestrator | Sunday 04 May 2025 00:48:05 +0000 (0:00:02.267) 0:01:51.731 ************
2025-05-04 00:48:24.238442 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:48:24.238455 | orchestrator |
2025-05-04 00:48:24.238468 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-04 00:48:24.238480 | orchestrator |
2025-05-04 00:48:24.238493 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-04 00:48:24.238505 | orchestrator | Sunday 04 May 2025 00:48:19 +0000 (0:00:14.355) 0:02:06.086 ************
2025-05-04 00:48:24.238518 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:48:24.238531 | orchestrator |
2025-05-04 00:48:24.238543 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-04 00:48:24.238556 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.469) 0:02:06.556 ************
2025-05-04 00:48:24.238569 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-04 00:48:24.238581 | orchestrator | enable_outward_rabbitmq_True
2025-05-04 00:48:24.238600 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-04 00:48:24.238613 | orchestrator | outward_rabbitmq_restart
2025-05-04 00:48:24.238626 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:48:24.238639 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:48:24.238651 | orchestrator | ok: [testbed-node-2]
2025-05-04
00:48:24.238663 | orchestrator |
2025-05-04 00:48:24.238676 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-04 00:48:24.238689 | orchestrator | skipping: no hosts matched
2025-05-04 00:48:24.238710 | orchestrator |
2025-05-04 00:48:24.238723 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-04 00:48:24.238736 | orchestrator | skipping: no hosts matched
2025-05-04 00:48:24.238748 | orchestrator |
2025-05-04 00:48:24.238761 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-04 00:48:24.238774 | orchestrator | skipping: no hosts matched
2025-05-04 00:48:24.238786 | orchestrator |
2025-05-04 00:48:24.238799 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:48:24.238812 | orchestrator | localhost      : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=1
2025-05-04 00:48:24.238825 | orchestrator | testbed-node-0 : ok=23   changed=14   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
2025-05-04 00:48:24.238839 | orchestrator | testbed-node-1 : ok=21   changed=14   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-05-04 00:48:24.238861 | orchestrator | testbed-node-2 : ok=21   changed=14   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-05-04 00:48:24.238882 | orchestrator |
2025-05-04 00:48:24.238903 | orchestrator |
2025-05-04 00:48:24.238926 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 00:48:24.238970 | orchestrator | Sunday 04 May 2025 00:48:22 +0000 (0:00:02.618) 0:02:09.175 ************
2025-05-04 00:48:24.238987 | orchestrator | ===============================================================================
2025-05-04 00:48:24.239000 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.34s
2025-05-04 00:48:24.239012 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.63s
2025-05-04 00:48:24.239025 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.41s
2025-05-04 00:48:24.239038 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.20s
2025-05-04 00:48:24.239050 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.75s
2025-05-04 00:48:24.239063 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.62s
2025-05-04 00:48:24.239075 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.32s
2025-05-04 00:48:24.239088 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.99s
2025-05-04 00:48:24.239101 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s
2025-05-04 00:48:24.239113 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.76s
2025-05-04 00:48:24.239126 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.56s
2025-05-04 00:48:24.239139 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s
2025-05-04 00:48:24.239152 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.47s
2025-05-04 00:48:24.239170 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s
2025-05-04 00:48:24.239184 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.20s
2025-05-04 00:48:24.239196 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.16s
2025-05-04 00:48:24.239209 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.13s
2025-05-04 00:48:24.239222
| orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s 2025-05-04 00:48:24.239234 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.02s 2025-05-04 00:48:24.239247 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.98s 2025-05-04 00:48:24.239267 | orchestrator | 2025-05-04 00:48:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:48:24.239422 | orchestrator | 2025-05-04 00:48:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:48:27.275098 | orchestrator | 2025-05-04 00:48:27 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:48:27.275335 | orchestrator | 2025-05-04 00:48:27 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:48:27.276897 | orchestrator | 2025-05-04 00:48:27 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:48:27.277652 | orchestrator | 2025-05-04 00:48:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:48:30.324476 | orchestrator | 2025-05-04 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:48:30.324645 | orchestrator | 2025-05-04 00:48:30 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:48:30.324880 | orchestrator | 2025-05-04 00:48:30 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:48:30.324908 | orchestrator | 2025-05-04 00:48:30 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:48:30.324930 | orchestrator | 2025-05-04 00:48:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:48:33.376713 | orchestrator | 2025-05-04 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:48:33.379695 | orchestrator | 2025-05-04 00:48:33 | INFO  | Task 
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED [... repeated poll output condensed: tasks c30e8b0a-5c25-4785-ae16-c49ec653aaae, 9c8b5475-2384-42d5-a66f-30cffc68d1bd, 72a36c02-0ea8-44f9-a833-fae68f9bdf24 and 06b7ef9d-42d9-4dab-ab51-d0c173886a5a were polled roughly every 3 seconds from 00:48:33 to 00:49:25 and remained in state STARTED, with "Wait 1 second(s) until the next check" between polls ...] 2025-05-04 00:49:28.313494 | orchestrator | 2025-05-04 00:49:28 | INFO  | Task 
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:49:28.313620 | orchestrator | 2025-05-04 00:49:28 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state STARTED 2025-05-04 00:49:28.314807 | orchestrator | 2025-05-04 00:49:28 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:49:31.352382 | orchestrator | 2025-05-04 00:49:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:49:31.352515 | orchestrator | 2025-05-04 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:49:31.352552 | orchestrator | 2025-05-04 00:49:31 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:49:31.355517 | orchestrator | 2025-05-04 00:49:31.355566 | orchestrator | 2025-05-04 00:49:31.355583 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 00:49:31.355598 | orchestrator | 2025-05-04 00:49:31.355612 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 00:49:31.355627 | orchestrator | Sunday 04 May 2025 00:47:12 +0000 (0:00:00.263) 0:00:00.263 ************ 2025-05-04 00:49:31.355641 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.355656 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.355670 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.355684 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:49:31.355750 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:49:31.355768 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:49:31.355783 | orchestrator | 2025-05-04 00:49:31.355797 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 00:49:31.355811 | orchestrator | Sunday 04 May 2025 00:47:13 +0000 (0:00:00.797) 0:00:01.060 ************ 2025-05-04 00:49:31.355890 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-04 
00:49:31.355908 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-04 00:49:31.355922 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-04 00:49:31.355994 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-04 00:49:31.356039 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-04 00:49:31.356054 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-04 00:49:31.356068 | orchestrator | 2025-05-04 00:49:31.356082 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-04 00:49:31.356097 | orchestrator | 2025-05-04 00:49:31.356114 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-04 00:49:31.356131 | orchestrator | Sunday 04 May 2025 00:47:14 +0000 (0:00:01.258) 0:00:02.319 ************ 2025-05-04 00:49:31.356147 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:49:31.356164 | orchestrator | 2025-05-04 00:49:31.356179 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-04 00:49:31.356195 | orchestrator | Sunday 04 May 2025 00:47:16 +0000 (0:00:01.526) 0:00:03.846 ************ 2025-05-04 00:49:31.356211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356358 | orchestrator | 2025-05-04 00:49:31.356374 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-04 00:49:31.356391 | orchestrator | Sunday 04 May 2025 00:47:18 +0000 (0:00:01.764) 0:00:05.610 ************ 2025-05-04 00:49:31.356420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-04 00:49:31.356469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356511 | orchestrator | 2025-05-04 00:49:31.356525 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-04 00:49:31.356539 | orchestrator | Sunday 04 May 2025 00:47:20 +0000 (0:00:02.405) 0:00:08.016 ************ 2025-05-04 00:49:31.356553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356667 | orchestrator | 2025-05-04 00:49:31.356681 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-04 00:49:31.356696 | orchestrator | Sunday 04 May 2025 00:47:21 +0000 (0:00:01.237) 0:00:09.253 ************ 2025-05-04 00:49:31.356710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356815 | orchestrator | 2025-05-04 00:49:31.356829 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-04 00:49:31.356843 | orchestrator | Sunday 04 May 2025 00:47:23 
+0000 (0:00:02.130) 0:00:11.383 ************ 2025-05-04 00:49:31.356857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356915 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356950 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.356972 | orchestrator | 2025-05-04 00:49:31.356986 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-04 00:49:31.357001 | orchestrator | Sunday 04 May 2025 00:47:25 +0000 (0:00:01.619) 0:00:13.003 ************ 2025-05-04 00:49:31.357015 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.357030 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.357044 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.357058 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:49:31.357072 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:49:31.357086 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:49:31.357101 | orchestrator | 2025-05-04 00:49:31.357116 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-04 00:49:31.357144 | orchestrator | Sunday 04 May 2025 00:47:28 +0000 (0:00:02.770) 0:00:15.773 ************ 2025-05-04 00:49:31.357169 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-04 00:49:31.357192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-04 00:49:31.357217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-04 00:49:31.357254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-04 00:49:31.357283 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-04 00:49:31.357308 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-04 00:49:31.357324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357352 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357387 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-04 00:49:31.357416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357473 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357488 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-04 00:49:31.357507 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357536 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357573 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357587 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-04 00:49:31.357601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 00:49:31.357615 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 00:49:31.357629 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 00:49:31.357643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 
00:49:31.357657 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 00:49:31.357671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-04 00:49:31.357685 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357699 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357727 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357756 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-04 00:49:31.357770 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-04 00:49:31.357784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-04 00:49:31.357798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-04 00:49:31.357812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-04 00:49:31.357832 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-04 00:49:31.357847 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-04 00:49:31.357861 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-04 00:49:31.357876 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-04 00:49:31.357891 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-04 00:49:31.357905 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-04 00:49:31.357919 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-04 00:49:31.357951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-04 00:49:31.357966 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-04 00:49:31.357981 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-04 00:49:31.358004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-04 00:49:31.358061 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-04 00:49:31.358078 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-04 00:49:31.358093 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-04 00:49:31.358108 | orchestrator | 2025-05-04 
00:49:31.358122 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358137 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:19.126) 0:00:34.900 ************ 2025-05-04 00:49:31.358151 | orchestrator | 2025-05-04 00:49:31.358165 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358179 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.054) 0:00:34.955 ************ 2025-05-04 00:49:31.358194 | orchestrator | 2025-05-04 00:49:31.358207 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358221 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.254) 0:00:35.209 ************ 2025-05-04 00:49:31.358236 | orchestrator | 2025-05-04 00:49:31.358250 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358263 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.055) 0:00:35.265 ************ 2025-05-04 00:49:31.358278 | orchestrator | 2025-05-04 00:49:31.358292 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358306 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.053) 0:00:35.318 ************ 2025-05-04 00:49:31.358320 | orchestrator | 2025-05-04 00:49:31.358334 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-04 00:49:31.358348 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.072) 0:00:35.390 ************ 2025-05-04 00:49:31.358362 | orchestrator | 2025-05-04 00:49:31.358376 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-04 00:49:31.358391 | orchestrator | Sunday 04 May 2025 00:47:48 +0000 (0:00:00.295) 0:00:35.686 ************ 2025-05-04 00:49:31.358405 | orchestrator | ok: 
[testbed-node-3] 2025-05-04 00:49:31.358419 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.358433 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.358447 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:49:31.358461 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.358476 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:49:31.358490 | orchestrator | 2025-05-04 00:49:31.358504 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-04 00:49:31.358518 | orchestrator | Sunday 04 May 2025 00:47:50 +0000 (0:00:01.973) 0:00:37.660 ************ 2025-05-04 00:49:31.358533 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.358547 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.358560 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.358574 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:49:31.358588 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:49:31.358602 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:49:31.358616 | orchestrator | 2025-05-04 00:49:31.358631 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-04 00:49:31.358645 | orchestrator | 2025-05-04 00:49:31.358660 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-04 00:49:31.358674 | orchestrator | Sunday 04 May 2025 00:48:15 +0000 (0:00:25.623) 0:01:03.283 ************ 2025-05-04 00:49:31.358688 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:49:31.358702 | orchestrator | 2025-05-04 00:49:31.358716 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-04 00:49:31.358730 | orchestrator | Sunday 04 May 2025 00:48:16 +0000 (0:00:00.496) 0:01:03.780 ************ 2025-05-04 00:49:31.358752 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:49:31.358766 | orchestrator | 2025-05-04 00:49:31.358786 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-04 00:49:31.358806 | orchestrator | Sunday 04 May 2025 00:48:16 +0000 (0:00:00.552) 0:01:04.333 ************ 2025-05-04 00:49:31.358820 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.358835 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.358849 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.358863 | orchestrator | 2025-05-04 00:49:31.358877 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-04 00:49:31.358891 | orchestrator | Sunday 04 May 2025 00:48:17 +0000 (0:00:00.790) 0:01:05.123 ************ 2025-05-04 00:49:31.358905 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.358919 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.358948 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.358963 | orchestrator | 2025-05-04 00:49:31.358977 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-04 00:49:31.358992 | orchestrator | Sunday 04 May 2025 00:48:17 +0000 (0:00:00.238) 0:01:05.362 ************ 2025-05-04 00:49:31.359006 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.359020 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.359034 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.359048 | orchestrator | 2025-05-04 00:49:31.359062 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-04 00:49:31.359077 | orchestrator | Sunday 04 May 2025 00:48:18 +0000 (0:00:00.390) 0:01:05.752 ************ 2025-05-04 00:49:31.359091 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.359105 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.359119 | orchestrator 
| ok: [testbed-node-2] 2025-05-04 00:49:31.359133 | orchestrator | 2025-05-04 00:49:31.359147 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-04 00:49:31.359162 | orchestrator | Sunday 04 May 2025 00:48:18 +0000 (0:00:00.364) 0:01:06.116 ************ 2025-05-04 00:49:31.359176 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.359189 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.359203 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.359217 | orchestrator | 2025-05-04 00:49:31.359231 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-04 00:49:31.359245 | orchestrator | Sunday 04 May 2025 00:48:18 +0000 (0:00:00.258) 0:01:06.375 ************ 2025-05-04 00:49:31.359259 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359274 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359288 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359303 | orchestrator | 2025-05-04 00:49:31.359317 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-04 00:49:31.359331 | orchestrator | Sunday 04 May 2025 00:48:19 +0000 (0:00:00.357) 0:01:06.732 ************ 2025-05-04 00:49:31.359345 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359359 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359386 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359400 | orchestrator | 2025-05-04 00:49:31.359414 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-04 00:49:31.359428 | orchestrator | Sunday 04 May 2025 00:48:19 +0000 (0:00:00.397) 0:01:07.129 ************ 2025-05-04 00:49:31.359442 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359457 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359471 | orchestrator | skipping: 
[testbed-node-2] 2025-05-04 00:49:31.359485 | orchestrator | 2025-05-04 00:49:31.359499 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-04 00:49:31.359513 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.413) 0:01:07.543 ************ 2025-05-04 00:49:31.359527 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359541 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359561 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359575 | orchestrator | 2025-05-04 00:49:31.359589 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-04 00:49:31.359604 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.293) 0:01:07.836 ************ 2025-05-04 00:49:31.359618 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359644 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359658 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359672 | orchestrator | 2025-05-04 00:49:31.359687 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-04 00:49:31.359701 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.332) 0:01:08.169 ************ 2025-05-04 00:49:31.359715 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359729 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359743 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359757 | orchestrator | 2025-05-04 00:49:31.359772 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-04 00:49:31.359786 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:00.320) 0:01:08.489 ************ 2025-05-04 00:49:31.359800 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359814 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359828 | orchestrator | skipping: 
[testbed-node-2] 2025-05-04 00:49:31.359842 | orchestrator | 2025-05-04 00:49:31.359856 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-04 00:49:31.359870 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:00.404) 0:01:08.893 ************ 2025-05-04 00:49:31.359885 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359899 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.359913 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.359940 | orchestrator | 2025-05-04 00:49:31.359955 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-04 00:49:31.359969 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:00.304) 0:01:09.198 ************ 2025-05-04 00:49:31.359983 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.359997 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360011 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360025 | orchestrator | 2025-05-04 00:49:31.360039 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-04 00:49:31.360053 | orchestrator | Sunday 04 May 2025 00:48:22 +0000 (0:00:01.070) 0:01:10.269 ************ 2025-05-04 00:49:31.360067 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360082 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360096 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360109 | orchestrator | 2025-05-04 00:49:31.360130 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-04 00:49:31.360145 | orchestrator | Sunday 04 May 2025 00:48:23 +0000 (0:00:00.669) 0:01:10.939 ************ 2025-05-04 00:49:31.360159 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360173 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360187 | orchestrator | skipping: 
[testbed-node-2] 2025-05-04 00:49:31.360200 | orchestrator | 2025-05-04 00:49:31.360214 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-04 00:49:31.360233 | orchestrator | Sunday 04 May 2025 00:48:24 +0000 (0:00:00.691) 0:01:11.630 ************ 2025-05-04 00:49:31.360248 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360262 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360276 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360290 | orchestrator | 2025-05-04 00:49:31.360304 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-04 00:49:31.360318 | orchestrator | Sunday 04 May 2025 00:48:24 +0000 (0:00:00.473) 0:01:12.103 ************ 2025-05-04 00:49:31.360333 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:49:31.360353 | orchestrator | 2025-05-04 00:49:31.360368 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-04 00:49:31.360382 | orchestrator | Sunday 04 May 2025 00:48:25 +0000 (0:00:00.925) 0:01:13.029 ************ 2025-05-04 00:49:31.360396 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.360410 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.360424 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.360438 | orchestrator | 2025-05-04 00:49:31.360452 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-04 00:49:31.360466 | orchestrator | Sunday 04 May 2025 00:48:26 +0000 (0:00:00.599) 0:01:13.628 ************ 2025-05-04 00:49:31.360480 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.360495 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.360509 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.360523 | orchestrator | 2025-05-04 00:49:31.360537 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-04 00:49:31.360551 | orchestrator | Sunday 04 May 2025 00:48:26 +0000 (0:00:00.622) 0:01:14.250 ************ 2025-05-04 00:49:31.360565 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360579 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360593 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360607 | orchestrator | 2025-05-04 00:49:31.360621 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-04 00:49:31.360635 | orchestrator | Sunday 04 May 2025 00:48:27 +0000 (0:00:00.555) 0:01:14.806 ************ 2025-05-04 00:49:31.360649 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360663 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360677 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360691 | orchestrator | 2025-05-04 00:49:31.360705 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-04 00:49:31.360719 | orchestrator | Sunday 04 May 2025 00:48:27 +0000 (0:00:00.488) 0:01:15.295 ************ 2025-05-04 00:49:31.360733 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360747 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360761 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360775 | orchestrator | 2025-05-04 00:49:31.360789 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-04 00:49:31.360803 | orchestrator | Sunday 04 May 2025 00:48:28 +0000 (0:00:00.346) 0:01:15.641 ************ 2025-05-04 00:49:31.360817 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360831 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360845 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360859 | orchestrator | 2025-05-04 
00:49:31.360873 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-04 00:49:31.360887 | orchestrator | Sunday 04 May 2025 00:48:28 +0000 (0:00:00.597) 0:01:16.239 ************ 2025-05-04 00:49:31.360901 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.360915 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.360958 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.360974 | orchestrator | 2025-05-04 00:49:31.360988 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-04 00:49:31.361002 | orchestrator | Sunday 04 May 2025 00:48:29 +0000 (0:00:00.531) 0:01:16.771 ************ 2025-05-04 00:49:31.361017 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.361031 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.361045 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.361058 | orchestrator | 2025-05-04 00:49:31.361073 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-04 00:49:31.361087 | orchestrator | Sunday 04 May 2025 00:48:29 +0000 (0:00:00.431) 0:01:17.202 ************ 2025-05-04 00:49:31.361101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31 | INFO  | Task 9c8b5475-2384-42d5-a66f-30cffc68d1bd is in state SUCCESS 2025-05-04 00:49:31.361178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}}}) 2025-05-04 00:49:31.361264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361308 | orchestrator | 2025-05-04 00:49:31.361322 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-04 00:49:31.361336 | orchestrator | Sunday 04 May 2025 00:48:31 +0000 (0:00:01.457) 0:01:18.660 ************ 2025-05-04 00:49:31.361351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361499 | orchestrator | 2025-05-04 00:49:31.361514 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-04 00:49:31.361528 | orchestrator | Sunday 04 May 2025 00:48:35 +0000 (0:00:04.530) 0:01:23.191 
************ 2025-05-04 00:49:31.361548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.361680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-04 00:49:31.361694 | orchestrator | 2025-05-04 00:49:31.361708 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.361730 | orchestrator | Sunday 04 May 2025 00:48:38 +0000 (0:00:02.626) 0:01:25.817 ************ 2025-05-04 00:49:31.361745 | orchestrator | 2025-05-04 00:49:31.361759 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.361774 | orchestrator | Sunday 04 May 2025 00:48:38 +0000 (0:00:00.082) 0:01:25.900 ************ 2025-05-04 00:49:31.361788 | orchestrator | 2025-05-04 00:49:31.361802 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.361817 | orchestrator | Sunday 04 May 2025 00:48:38 +0000 (0:00:00.079) 0:01:25.979 ************ 2025-05-04 00:49:31.361831 | orchestrator | 2025-05-04 00:49:31.361845 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-04 00:49:31.361863 | orchestrator | Sunday 04 May 2025 00:48:38 +0000 (0:00:00.259) 0:01:26.239 ************ 2025-05-04 00:49:31.361878 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.361892 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.361906 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.361920 | orchestrator | 2025-05-04 00:49:31.361980 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-04 00:49:31.361996 | orchestrator | Sunday 04 May 2025 00:48:41 +0000 (0:00:02.940) 0:01:29.180 ************ 2025-05-04 00:49:31.362010 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.362056 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.362071 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.362086 | orchestrator | 2025-05-04 00:49:31.362100 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-northd container] ************************ 2025-05-04 00:49:31.362114 | orchestrator | Sunday 04 May 2025 00:48:44 +0000 (0:00:02.958) 0:01:32.138 ************ 2025-05-04 00:49:31.362128 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.362142 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.362156 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.362170 | orchestrator | 2025-05-04 00:49:31.362184 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-04 00:49:31.362198 | orchestrator | Sunday 04 May 2025 00:48:47 +0000 (0:00:02.845) 0:01:34.984 ************ 2025-05-04 00:49:31.362212 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.362226 | orchestrator | 2025-05-04 00:49:31.362240 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-04 00:49:31.362255 | orchestrator | Sunday 04 May 2025 00:48:47 +0000 (0:00:00.127) 0:01:35.112 ************ 2025-05-04 00:49:31.362268 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.362290 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.362305 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.362319 | orchestrator | 2025-05-04 00:49:31.362333 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-04 00:49:31.362347 | orchestrator | Sunday 04 May 2025 00:48:48 +0000 (0:00:01.035) 0:01:36.147 ************ 2025-05-04 00:49:31.362362 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.362376 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.362390 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.362404 | orchestrator | 2025-05-04 00:49:31.362418 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-04 00:49:31.362432 | orchestrator | Sunday 04 May 2025 00:48:49 +0000 (0:00:00.700) 0:01:36.848 ************ 
2025-05-04 00:49:31.362446 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.362461 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.362475 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.362489 | orchestrator | 2025-05-04 00:49:31.362554 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-04 00:49:31.362595 | orchestrator | Sunday 04 May 2025 00:48:50 +0000 (0:00:01.056) 0:01:37.905 ************ 2025-05-04 00:49:31.362609 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.362622 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.362635 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.362658 | orchestrator | 2025-05-04 00:49:31.362671 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-04 00:49:31.362684 | orchestrator | Sunday 04 May 2025 00:48:51 +0000 (0:00:00.628) 0:01:38.533 ************ 2025-05-04 00:49:31.362724 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.362739 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.362752 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.362765 | orchestrator | 2025-05-04 00:49:31.362778 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-04 00:49:31.362790 | orchestrator | Sunday 04 May 2025 00:48:52 +0000 (0:00:01.310) 0:01:39.843 ************ 2025-05-04 00:49:31.362803 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.362815 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.362828 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.362840 | orchestrator | 2025-05-04 00:49:31.362853 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-04 00:49:31.362865 | orchestrator | Sunday 04 May 2025 00:48:53 +0000 (0:00:00.749) 0:01:40.592 ************ 2025-05-04 00:49:31.362878 | orchestrator | ok: 
[testbed-node-0] 2025-05-04 00:49:31.362891 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.362915 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.362944 | orchestrator | 2025-05-04 00:49:31.362957 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-04 00:49:31.362970 | orchestrator | Sunday 04 May 2025 00:48:53 +0000 (0:00:00.564) 0:01:41.157 ************ 2025-05-04 00:49:31.363010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363026 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363077 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363171 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363183 | orchestrator | 2025-05-04 00:49:31.363196 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-04 00:49:31.363209 | orchestrator | Sunday 04 May 2025 00:48:55 +0000 (0:00:01.789) 0:01:42.947 ************ 2025-05-04 00:49:31.363221 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363234 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363248 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363265 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363357 | orchestrator | 2025-05-04 00:49:31.363369 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-04 00:49:31.363395 | orchestrator | Sunday 04 May 2025 00:48:59 +0000 (0:00:04.020) 0:01:46.968 ************ 2025-05-04 00:49:31.363408 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363421 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363434 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363447 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363472 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363496 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363529 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 00:49:31.363541 | orchestrator | 2025-05-04 00:49:31.363554 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.363567 | orchestrator | Sunday 04 May 2025 00:49:02 +0000 (0:00:03.087) 0:01:50.056 ************ 2025-05-04 00:49:31.363579 | orchestrator | 2025-05-04 00:49:31.363592 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.363604 | orchestrator | Sunday 04 May 2025 00:49:02 +0000 (0:00:00.242) 0:01:50.299 ************ 2025-05-04 00:49:31.363617 | orchestrator | 2025-05-04 00:49:31.363629 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-04 00:49:31.363642 | orchestrator | Sunday 04 May 2025 00:49:02 +0000 (0:00:00.062) 0:01:50.362 ************ 2025-05-04 00:49:31.363654 | orchestrator | 2025-05-04 00:49:31.363667 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-04 
00:49:31.363680 | orchestrator | Sunday 04 May 2025 00:49:02 +0000 (0:00:00.064) 0:01:50.426 ************ 2025-05-04 00:49:31.363692 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.363705 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.363717 | orchestrator | 2025-05-04 00:49:31.363729 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-04 00:49:31.363742 | orchestrator | Sunday 04 May 2025 00:49:09 +0000 (0:00:06.712) 0:01:57.138 ************ 2025-05-04 00:49:31.363754 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.363767 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.363779 | orchestrator | 2025-05-04 00:49:31.363792 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-04 00:49:31.363804 | orchestrator | Sunday 04 May 2025 00:49:15 +0000 (0:00:06.193) 0:02:03.332 ************ 2025-05-04 00:49:31.363816 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:49:31.363829 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:49:31.363841 | orchestrator | 2025-05-04 00:49:31.363853 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-04 00:49:31.363866 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:06.418) 0:02:09.750 ************ 2025-05-04 00:49:31.363878 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:49:31.363891 | orchestrator | 2025-05-04 00:49:31.363903 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-04 00:49:31.363915 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:00.386) 0:02:10.137 ************ 2025-05-04 00:49:31.363971 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.363985 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.363998 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.364017 | orchestrator | 2025-05-04 
00:49:31.364030 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-04 00:49:31.364042 | orchestrator | Sunday 04 May 2025 00:49:23 +0000 (0:00:00.792) 0:02:10.929 ************ 2025-05-04 00:49:31.364055 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.364067 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.364079 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.364092 | orchestrator | 2025-05-04 00:49:31.364104 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-04 00:49:31.364116 | orchestrator | Sunday 04 May 2025 00:49:24 +0000 (0:00:00.722) 0:02:11.652 ************ 2025-05-04 00:49:31.364129 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.364141 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.364153 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.364167 | orchestrator | 2025-05-04 00:49:31.364179 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-04 00:49:31.364192 | orchestrator | Sunday 04 May 2025 00:49:25 +0000 (0:00:01.287) 0:02:12.939 ************ 2025-05-04 00:49:31.364204 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:49:31.364224 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:49:31.364238 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:49:31.364249 | orchestrator | 2025-05-04 00:49:31.364259 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-04 00:49:31.364270 | orchestrator | Sunday 04 May 2025 00:49:26 +0000 (0:00:01.063) 0:02:14.002 ************ 2025-05-04 00:49:31.364280 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.364290 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.364300 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.364310 | orchestrator | 2025-05-04 00:49:31.364320 | orchestrator | TASK 
[ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-04 00:49:31.364331 | orchestrator | Sunday 04 May 2025 00:49:27 +0000 (0:00:00.795) 0:02:14.797 ************ 2025-05-04 00:49:31.364341 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:49:31.364351 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:49:31.364361 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:49:31.364371 | orchestrator | 2025-05-04 00:49:31.364381 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:49:31.364392 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-04 00:49:31.364408 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-04 00:49:34.414821 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-04 00:49:34.414957 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:49:34.414992 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:49:34.415003 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 00:49:34.415013 | orchestrator | 2025-05-04 00:49:34.415023 | orchestrator | 2025-05-04 00:49:34.415034 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:49:34.415045 | orchestrator | Sunday 04 May 2025 00:49:28 +0000 (0:00:01.137) 0:02:15.935 ************ 2025-05-04 00:49:34.415054 | orchestrator | =============================================================================== 2025-05-04 00:49:34.415064 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.62s 2025-05-04 00:49:34.415073 | orchestrator | ovn-controller : 
Configure OVN in OVSDB -------------------------------- 19.13s 2025-05-04 00:49:34.415109 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.65s 2025-05-04 00:49:34.415119 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.26s 2025-05-04 00:49:34.415129 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.15s 2025-05-04 00:49:34.415138 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.53s 2025-05-04 00:49:34.415152 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.02s 2025-05-04 00:49:34.415161 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s 2025-05-04 00:49:34.415171 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.77s 2025-05-04 00:49:34.415180 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.63s 2025-05-04 00:49:34.415189 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.41s 2025-05-04 00:49:34.415199 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.13s 2025-05-04 00:49:34.415208 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.97s 2025-05-04 00:49:34.415218 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.79s 2025-05-04 00:49:34.415227 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.76s 2025-05-04 00:49:34.415236 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.62s 2025-05-04 00:49:34.415245 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.53s 2025-05-04 00:49:34.415255 | orchestrator | ovn-db : Ensuring config 
directories exist ------------------------------ 1.46s 2025-05-04 00:49:34.415264 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.31s 2025-05-04 00:49:34.415273 | orchestrator | ovn-db : Get OVN_Southbound cluster leader ------------------------------ 1.29s 2025-05-04 00:49:34.415284 | orchestrator | 2025-05-04 00:49:31 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:49:34.415293 | orchestrator | 2025-05-04 00:49:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:49:34.415303 | orchestrator | 2025-05-04 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:49:34.415326 | orchestrator | 2025-05-04 00:49:34 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:49:34.419003 | orchestrator | 2025-05-04 00:49:34 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:49:34.420715 | orchestrator | 2025-05-04 00:49:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:49:34.421085 | orchestrator | 2025-05-04 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:49:37.479150 | orchestrator | 2025-05-04 00:49:37 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:49:37.479987 | orchestrator | 2025-05-04 00:49:37 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:49:37.481461 | orchestrator | 2025-05-04 00:49:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:49:40.530089 | orchestrator | 2025-05-04 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:49:40.530260 | orchestrator | 2025-05-04 00:49:40 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:49:40.530755 | orchestrator | 2025-05-04 00:49:40 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 
2025-05-04 00:50:26.330206 | orchestrator | 2025-05-04 00:50:26 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:26.330579 | orchestrator | 2025-05-04 00:50:26 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:26.332066 | orchestrator | 2025-05-04 00:50:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 
00:50:26.332286 | orchestrator | 2025-05-04 00:50:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:29.387928 | orchestrator | 2025-05-04 00:50:29 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:29.389912 | orchestrator | 2025-05-04 00:50:29 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:29.392212 | orchestrator | 2025-05-04 00:50:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:29.392738 | orchestrator | 2025-05-04 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:32.437820 | orchestrator | 2025-05-04 00:50:32 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:32.438174 | orchestrator | 2025-05-04 00:50:32 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:32.438213 | orchestrator | 2025-05-04 00:50:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:35.491968 | orchestrator | 2025-05-04 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:35.492120 | orchestrator | 2025-05-04 00:50:35 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:35.493162 | orchestrator | 2025-05-04 00:50:35 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:35.494105 | orchestrator | 2025-05-04 00:50:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:38.546364 | orchestrator | 2025-05-04 00:50:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:38.546534 | orchestrator | 2025-05-04 00:50:38 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:41.599798 | orchestrator | 2025-05-04 00:50:38 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:41.599984 | orchestrator | 2025-05-04 00:50:38 | 
INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:41.600007 | orchestrator | 2025-05-04 00:50:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:41.600042 | orchestrator | 2025-05-04 00:50:41 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:41.601476 | orchestrator | 2025-05-04 00:50:41 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:41.603570 | orchestrator | 2025-05-04 00:50:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:44.653307 | orchestrator | 2025-05-04 00:50:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:44.653455 | orchestrator | 2025-05-04 00:50:44 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:44.657529 | orchestrator | 2025-05-04 00:50:44 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:44.659036 | orchestrator | 2025-05-04 00:50:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:44.659281 | orchestrator | 2025-05-04 00:50:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:47.698233 | orchestrator | 2025-05-04 00:50:47 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:47.701644 | orchestrator | 2025-05-04 00:50:47 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:47.702200 | orchestrator | 2025-05-04 00:50:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:50.751638 | orchestrator | 2025-05-04 00:50:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:50.751797 | orchestrator | 2025-05-04 00:50:50 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:50.753448 | orchestrator | 2025-05-04 00:50:50 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in 
state STARTED 2025-05-04 00:50:50.755514 | orchestrator | 2025-05-04 00:50:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:50.755642 | orchestrator | 2025-05-04 00:50:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:53.823621 | orchestrator | 2025-05-04 00:50:53 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:53.824554 | orchestrator | 2025-05-04 00:50:53 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:53.825182 | orchestrator | 2025-05-04 00:50:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:53.825423 | orchestrator | 2025-05-04 00:50:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:56.884500 | orchestrator | 2025-05-04 00:50:56 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:56.885365 | orchestrator | 2025-05-04 00:50:56 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:50:56.887317 | orchestrator | 2025-05-04 00:50:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:50:59.929080 | orchestrator | 2025-05-04 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:50:59.929235 | orchestrator | 2025-05-04 00:50:59 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:50:59.930199 | orchestrator | 2025-05-04 00:50:59 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:02.984357 | orchestrator | 2025-05-04 00:50:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:02.984469 | orchestrator | 2025-05-04 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:02.984500 | orchestrator | 2025-05-04 00:51:02 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:06.050409 | orchestrator 
| 2025-05-04 00:51:02 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:06.050517 | orchestrator | 2025-05-04 00:51:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:06.050529 | orchestrator | 2025-05-04 00:51:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:06.050552 | orchestrator | 2025-05-04 00:51:06 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:06.051097 | orchestrator | 2025-05-04 00:51:06 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:06.052043 | orchestrator | 2025-05-04 00:51:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:09.100289 | orchestrator | 2025-05-04 00:51:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:09.100536 | orchestrator | 2025-05-04 00:51:09 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:09.101443 | orchestrator | 2025-05-04 00:51:09 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:09.101490 | orchestrator | 2025-05-04 00:51:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:12.153112 | orchestrator | 2025-05-04 00:51:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:12.153269 | orchestrator | 2025-05-04 00:51:12 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:12.153429 | orchestrator | 2025-05-04 00:51:12 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:12.154539 | orchestrator | 2025-05-04 00:51:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:15.210118 | orchestrator | 2025-05-04 00:51:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:15.210260 | orchestrator | 2025-05-04 00:51:15 | INFO  | Task 
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:15.211550 | orchestrator | 2025-05-04 00:51:15 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:15.212390 | orchestrator | 2025-05-04 00:51:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:15.212491 | orchestrator | 2025-05-04 00:51:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:18.260428 | orchestrator | 2025-05-04 00:51:18 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:18.262333 | orchestrator | 2025-05-04 00:51:18 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:18.264466 | orchestrator | 2025-05-04 00:51:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:21.320607 | orchestrator | 2025-05-04 00:51:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:21.320757 | orchestrator | 2025-05-04 00:51:21 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:21.321647 | orchestrator | 2025-05-04 00:51:21 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:21.321710 | orchestrator | 2025-05-04 00:51:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:24.379034 | orchestrator | 2025-05-04 00:51:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:24.379190 | orchestrator | 2025-05-04 00:51:24 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:24.380438 | orchestrator | 2025-05-04 00:51:24 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:24.381948 | orchestrator | 2025-05-04 00:51:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:24.382125 | orchestrator | 2025-05-04 00:51:24 | INFO  | Wait 1 second(s) until the next 
check 2025-05-04 00:51:27.433817 | orchestrator | 2025-05-04 00:51:27 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:27.434922 | orchestrator | 2025-05-04 00:51:27 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:27.436888 | orchestrator | 2025-05-04 00:51:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:30.479249 | orchestrator | 2025-05-04 00:51:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:30.479414 | orchestrator | 2025-05-04 00:51:30 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:30.480402 | orchestrator | 2025-05-04 00:51:30 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:30.481970 | orchestrator | 2025-05-04 00:51:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:33.530175 | orchestrator | 2025-05-04 00:51:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:33.530319 | orchestrator | 2025-05-04 00:51:33 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:33.534211 | orchestrator | 2025-05-04 00:51:33 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:33.537661 | orchestrator | 2025-05-04 00:51:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:33.538583 | orchestrator | 2025-05-04 00:51:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:36.601293 | orchestrator | 2025-05-04 00:51:36 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:36.602902 | orchestrator | 2025-05-04 00:51:36 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:36.604610 | orchestrator | 2025-05-04 00:51:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 
00:51:36.605222 | orchestrator | 2025-05-04 00:51:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:39.658142 | orchestrator | 2025-05-04 00:51:39 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:39.661464 | orchestrator | 2025-05-04 00:51:39 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:39.661550 | orchestrator | 2025-05-04 00:51:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:42.720350 | orchestrator | 2025-05-04 00:51:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:42.720531 | orchestrator | 2025-05-04 00:51:42 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:42.721462 | orchestrator | 2025-05-04 00:51:42 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:42.725337 | orchestrator | 2025-05-04 00:51:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:45.792425 | orchestrator | 2025-05-04 00:51:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:45.792555 | orchestrator | 2025-05-04 00:51:45 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:45.794289 | orchestrator | 2025-05-04 00:51:45 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:45.798169 | orchestrator | 2025-05-04 00:51:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:48.864433 | orchestrator | 2025-05-04 00:51:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:48.864612 | orchestrator | 2025-05-04 00:51:48 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:48.866634 | orchestrator | 2025-05-04 00:51:48 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:48.868777 | orchestrator | 2025-05-04 00:51:48 | 
INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:51.918508 | orchestrator | 2025-05-04 00:51:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:51.918675 | orchestrator | 2025-05-04 00:51:51 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:54.970317 | orchestrator | 2025-05-04 00:51:51 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:54.970462 | orchestrator | 2025-05-04 00:51:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:54.970482 | orchestrator | 2025-05-04 00:51:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:54.970556 | orchestrator | 2025-05-04 00:51:54 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:54.972099 | orchestrator | 2025-05-04 00:51:54 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:54.972227 | orchestrator | 2025-05-04 00:51:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:58.030137 | orchestrator | 2025-05-04 00:51:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:51:58.030270 | orchestrator | 2025-05-04 00:51:58 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:51:58.033403 | orchestrator | 2025-05-04 00:51:58 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:51:58.036350 | orchestrator | 2025-05-04 00:51:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:51:58.036792 | orchestrator | 2025-05-04 00:51:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:01.102142 | orchestrator | 2025-05-04 00:52:01 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:01.104672 | orchestrator | 2025-05-04 00:52:01 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in 
state STARTED 2025-05-04 00:52:01.107152 | orchestrator | 2025-05-04 00:52:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:01.107332 | orchestrator | 2025-05-04 00:52:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:04.174424 | orchestrator | 2025-05-04 00:52:04 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:04.176423 | orchestrator | 2025-05-04 00:52:04 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:04.180636 | orchestrator | 2025-05-04 00:52:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:07.218312 | orchestrator | 2025-05-04 00:52:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:07.218469 | orchestrator | 2025-05-04 00:52:07 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:07.219942 | orchestrator | 2025-05-04 00:52:07 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:07.221598 | orchestrator | 2025-05-04 00:52:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:10.276489 | orchestrator | 2025-05-04 00:52:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:10.276652 | orchestrator | 2025-05-04 00:52:10 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:10.281022 | orchestrator | 2025-05-04 00:52:10 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:10.282441 | orchestrator | 2025-05-04 00:52:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:13.333979 | orchestrator | 2025-05-04 00:52:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:13.334223 | orchestrator | 2025-05-04 00:52:13 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:13.335947 | orchestrator 
| 2025-05-04 00:52:13 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:13.337806 | orchestrator | 2025-05-04 00:52:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:13.338202 | orchestrator | 2025-05-04 00:52:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:16.386294 | orchestrator | 2025-05-04 00:52:16 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:16.391028 | orchestrator | 2025-05-04 00:52:16 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:16.391084 | orchestrator | 2025-05-04 00:52:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:19.438419 | orchestrator | 2025-05-04 00:52:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:19.438583 | orchestrator | 2025-05-04 00:52:19 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:19.439084 | orchestrator | 2025-05-04 00:52:19 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:19.440718 | orchestrator | 2025-05-04 00:52:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:22.485589 | orchestrator | 2025-05-04 00:52:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:22.485732 | orchestrator | 2025-05-04 00:52:22 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:22.489458 | orchestrator | 2025-05-04 00:52:22 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:22.491225 | orchestrator | 2025-05-04 00:52:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:25.539421 | orchestrator | 2025-05-04 00:52:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:25.539566 | orchestrator | 2025-05-04 00:52:25 | INFO  | Task 
c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:25.539765 | orchestrator | 2025-05-04 00:52:25 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:25.539795 | orchestrator | 2025-05-04 00:52:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:28.583572 | orchestrator | 2025-05-04 00:52:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:28.583700 | orchestrator | 2025-05-04 00:52:28 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:28.585874 | orchestrator | 2025-05-04 00:52:28 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:28.586627 | orchestrator | 2025-05-04 00:52:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:31.646514 | orchestrator | 2025-05-04 00:52:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:31.646693 | orchestrator | 2025-05-04 00:52:31 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:31.649175 | orchestrator | 2025-05-04 00:52:31 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:31.650897 | orchestrator | 2025-05-04 00:52:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:31.651940 | orchestrator | 2025-05-04 00:52:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:34.705990 | orchestrator | 2025-05-04 00:52:34 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:34.709498 | orchestrator | 2025-05-04 00:52:34 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:34.710795 | orchestrator | 2025-05-04 00:52:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:37.766820 | orchestrator | 2025-05-04 00:52:34 | INFO  | Wait 1 second(s) until the next 
check 2025-05-04 00:52:37.767021 | orchestrator | 2025-05-04 00:52:37 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:37.772037 | orchestrator | 2025-05-04 00:52:37 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:37.774116 | orchestrator | 2025-05-04 00:52:37 | INFO  | Task 1865bdc9-451a-4a44-8d0b-f89a0f8a1748 is in state STARTED 2025-05-04 00:52:37.774176 | orchestrator | 2025-05-04 00:52:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:37.774558 | orchestrator | 2025-05-04 00:52:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:40.834373 | orchestrator | 2025-05-04 00:52:40 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:40.836034 | orchestrator | 2025-05-04 00:52:40 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:40.837401 | orchestrator | 2025-05-04 00:52:40 | INFO  | Task 1865bdc9-451a-4a44-8d0b-f89a0f8a1748 is in state STARTED 2025-05-04 00:52:40.838702 | orchestrator | 2025-05-04 00:52:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:40.838809 | orchestrator | 2025-05-04 00:52:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:43.901067 | orchestrator | 2025-05-04 00:52:43 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:43.905188 | orchestrator | 2025-05-04 00:52:43 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:43.906461 | orchestrator | 2025-05-04 00:52:43 | INFO  | Task 1865bdc9-451a-4a44-8d0b-f89a0f8a1748 is in state STARTED 2025-05-04 00:52:43.907964 | orchestrator | 2025-05-04 00:52:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:46.962078 | orchestrator | 2025-05-04 00:52:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 
00:52:46.962233 | orchestrator | 2025-05-04 00:52:46 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:46.962881 | orchestrator | 2025-05-04 00:52:46 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:46.966154 | orchestrator | 2025-05-04 00:52:46 | INFO  | Task 1865bdc9-451a-4a44-8d0b-f89a0f8a1748 is in state STARTED 2025-05-04 00:52:46.967227 | orchestrator | 2025-05-04 00:52:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:46.967340 | orchestrator | 2025-05-04 00:52:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:50.070882 | orchestrator | 2025-05-04 00:52:50 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:50.075331 | orchestrator | 2025-05-04 00:52:50 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:50.075367 | orchestrator | 2025-05-04 00:52:50 | INFO  | Task 1865bdc9-451a-4a44-8d0b-f89a0f8a1748 is in state SUCCESS 2025-05-04 00:52:50.075388 | orchestrator | 2025-05-04 00:52:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:53.127338 | orchestrator | 2025-05-04 00:52:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:53.127489 | orchestrator | 2025-05-04 00:52:53 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:53.129110 | orchestrator | 2025-05-04 00:52:53 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED 2025-05-04 00:52:53.129152 | orchestrator | 2025-05-04 00:52:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:52:53.129353 | orchestrator | 2025-05-04 00:52:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:52:56.177344 | orchestrator | 2025-05-04 00:52:56 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:52:56.178994 | orchestrator 
| 2025-05-04 00:52:56 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:52:56.181697 | orchestrator | 2025-05-04 00:52:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:52:59.226448 | orchestrator | 2025-05-04 00:52:56 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:52:59.226594 | orchestrator | 2025-05-04 00:52:59 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:52:59.229307 | orchestrator | 2025-05-04 00:52:59 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:52:59.229355 | orchestrator | 2025-05-04 00:52:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:02.270605 | orchestrator | 2025-05-04 00:52:59 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:02.270917 | orchestrator | 2025-05-04 00:53:02 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:02.271159 | orchestrator | 2025-05-04 00:53:02 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:02.271221 | orchestrator | 2025-05-04 00:53:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:05.311369 | orchestrator | 2025-05-04 00:53:02 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:05.311520 | orchestrator | 2025-05-04 00:53:05 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:05.312231 | orchestrator | 2025-05-04 00:53:05 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:05.314290 | orchestrator | 2025-05-04 00:53:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:08.366664 | orchestrator | 2025-05-04 00:53:05 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:08.366872 | orchestrator | 2025-05-04 00:53:08 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:08.368541 | orchestrator | 2025-05-04 00:53:08 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:08.370853 | orchestrator | 2025-05-04 00:53:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:11.415195 | orchestrator | 2025-05-04 00:53:08 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:11.415389 | orchestrator | 2025-05-04 00:53:11 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:14.458936 | orchestrator | 2025-05-04 00:53:11 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:14.459025 | orchestrator | 2025-05-04 00:53:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:14.459034 | orchestrator | 2025-05-04 00:53:11 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:14.459051 | orchestrator | 2025-05-04 00:53:14 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:14.460686 | orchestrator | 2025-05-04 00:53:14 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:14.462285 | orchestrator | 2025-05-04 00:53:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:14.462658 | orchestrator | 2025-05-04 00:53:14 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:17.514112 | orchestrator | 2025-05-04 00:53:17 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:17.514729 | orchestrator | 2025-05-04 00:53:17 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:17.516910 | orchestrator | 2025-05-04 00:53:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:20.565119 | orchestrator | 2025-05-04 00:53:17 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:20.565288 | orchestrator | 2025-05-04 00:53:20 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:20.565437 | orchestrator | 2025-05-04 00:53:20 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:20.567867 | orchestrator | 2025-05-04 00:53:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:23.610011 | orchestrator | 2025-05-04 00:53:20 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:23.610332 | orchestrator | 2025-05-04 00:53:23 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:23.610988 | orchestrator | 2025-05-04 00:53:23 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:23.612002 | orchestrator | 2025-05-04 00:53:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:23.612134 | orchestrator | 2025-05-04 00:53:23 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:26.657215 | orchestrator | 2025-05-04 00:53:26 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:26.657754 | orchestrator | 2025-05-04 00:53:26 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state STARTED
2025-05-04 00:53:26.658973 | orchestrator | 2025-05-04 00:53:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:53:29.717094 | orchestrator | 2025-05-04 00:53:26 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:53:29.717253 | orchestrator | 2025-05-04 00:53:29 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED
2025-05-04 00:53:29.719385 | orchestrator | 2025-05-04 00:53:29 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED
2025-05-04 00:53:29.727925 | orchestrator | 2025-05-04 00:53:29 | INFO  | Task 72a36c02-0ea8-44f9-a833-fae68f9bdf24 is in state SUCCESS
2025-05-04
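The entries above are the deployer's wait loop: it re-reads the state of each outstanding task and sleeps one second between rounds until a task reports SUCCESS. A minimal sketch of that pattern, assuming a hypothetical `check_state` stub in place of the real OSISM task-state query (here it flips to SUCCESS on the third call):

```shell
# Wait-and-poll sketch; check_state is a stand-in for the real state lookup.
n=0
check_state() {
  n=$((n + 1))
  if [ "$n" -ge 3 ]; then state="SUCCESS"; else state="STARTED"; fi
}

state="STARTED"
while [ "$state" != "SUCCESS" ]; do
  check_state
  echo "Task 72a36c02 is in state $state"
  # Mirror the log: only wait if another round is needed.
  [ "$state" = "SUCCESS" ] || sleep 1
done
```

The fixed one-second backoff matches the repeated "Wait 1 second(s) until the next check" lines in the log.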
00:53:29.728213 | orchestrator |
2025-05-04 00:53:29.730422 | orchestrator | None
2025-05-04 00:53:29.730504 | orchestrator |
2025-05-04 00:53:29.730535 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:53:29.730565 | orchestrator |
2025-05-04 00:53:29.730593 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 00:53:29.730620 | orchestrator | Sunday 04 May 2025 00:45:56 +0000 (0:00:00.327) 0:00:00.327 ************
2025-05-04 00:53:29.730649 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.730681 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.730710 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.730739 | orchestrator |
2025-05-04 00:53:29.730767 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:53:29.730795 | orchestrator | Sunday 04 May 2025 00:45:56 +0000 (0:00:00.314) 0:00:00.642 ************
2025-05-04 00:53:29.730857 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-04 00:53:29.730883 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-04 00:53:29.730905 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-04 00:53:29.730929 | orchestrator |
2025-05-04 00:53:29.730955 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-04 00:53:29.730979 | orchestrator |
2025-05-04 00:53:29.731004 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-04 00:53:29.731030 | orchestrator | Sunday 04 May 2025 00:45:56 +0000 (0:00:00.403) 0:00:01.046 ************
2025-05-04 00:53:29.731086 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.731103 | orchestrator |
2025-05-04 00:53:29.731119 |
orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-04 00:53:29.731135 | orchestrator | Sunday 04 May 2025 00:45:57 +0000 (0:00:00.980) 0:00:02.026 ************
2025-05-04 00:53:29.731152 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.731167 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.731183 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.731199 | orchestrator |
2025-05-04 00:53:29.731214 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-04 00:53:29.731230 | orchestrator | Sunday 04 May 2025 00:45:59 +0000 (0:00:01.208) 0:00:03.235 ************
2025-05-04 00:53:29.731245 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.731261 | orchestrator |
2025-05-04 00:53:29.731277 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-04 00:53:29.731292 | orchestrator | Sunday 04 May 2025 00:46:00 +0000 (0:00:01.107) 0:00:04.343 ************
2025-05-04 00:53:29.731306 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.731320 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.731334 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.731348 | orchestrator |
2025-05-04 00:53:29.731363 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-04 00:53:29.731377 | orchestrator | Sunday 04 May 2025 00:46:01 +0000 (0:00:00.882) 0:00:05.226 ************
2025-05-04 00:53:29.731391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731434 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731463 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-04 00:53:29.731477 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-04 00:53:29.731492 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-04 00:53:29.731507 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-04 00:53:29.731521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-04 00:53:29.731535 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-04 00:53:29.731549 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-04 00:53:29.731563 | orchestrator |
2025-05-04 00:53:29.731577 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-04 00:53:29.731592 | orchestrator | Sunday 04 May 2025 00:46:05 +0000 (0:00:04.115) 0:00:09.341 ************
2025-05-04 00:53:29.731606 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-04 00:53:29.731638 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-04 00:53:29.731652 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-04 00:53:29.731667 | orchestrator |
2025-05-04 00:53:29.731681 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-04 00:53:29.731695 | orchestrator | Sunday 04 May 2025 00:46:06 +0000 (0:00:00.967) 0:00:10.308 ************
2025-05-04
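The sysctl and module-load tasks above boil down to writing a sysctl drop-in and a modules-load.d entry on each node. A rough, unprivileged sketch of the effect (values taken from the log; files written under /tmp here, whereas the real paths are /etc/sysctl.d/ and /etc/modules-load.d/):

```shell
# Rough equivalent of the sysctl/module-load tasks; /tmp paths are for the
# sketch only so it runs without root.
conf=/tmp/90-kolla-sysctl.conf
{
  echo 'net.ipv6.ip_nonlocal_bind = 1'
  echo 'net.ipv4.ip_nonlocal_bind = 1'
  echo 'net.unix.max_dgram_qlen = 128'
} > "$conf"
# Persist the ip_vs module the keepalived setup depends on.
echo ip_vs > /tmp/kolla-ip_vs.conf
# On a real node: sysctl -p "$conf" and modprobe ip_vs would apply these now.
cat "$conf"
```

The `net.ipv4.tcp_retries2` item reports `ok` with value `KOLLA_UNSET`, i.e. kolla-ansible leaves that key at the kernel default.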
00:53:29.731709 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-04 00:53:29.731723 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-04 00:53:29.731746 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-04 00:53:29.731760 | orchestrator | 2025-05-04 00:53:29.731775 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-04 00:53:29.731789 | orchestrator | Sunday 04 May 2025 00:46:07 +0000 (0:00:01.488) 0:00:11.797 ************ 2025-05-04 00:53:29.731803 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-04 00:53:29.731851 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.731957 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-04 00:53:29.731978 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.731993 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-04 00:53:29.732008 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.732022 | orchestrator | 2025-05-04 00:53:29.732036 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-04 00:53:29.732051 | orchestrator | Sunday 04 May 2025 00:46:08 +0000 (0:00:00.634) 0:00:12.431 ************ 2025-05-04 00:53:29.732068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-04 
00:53:29.732091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.732106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.732121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.732137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.732169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.732185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.732202 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.732218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.732233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': 
'30'}}})  2025-05-04 00:53:29.732248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.732278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.732303 | orchestrator | 2025-05-04 00:53:29.732328 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-04 00:53:29.732355 | orchestrator | Sunday 04 May 2025 00:46:11 +0000 (0:00:02.659) 0:00:15.091 ************ 2025-05-04 00:53:29.732384 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.732412 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.732438 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.732466 | orchestrator | 2025-05-04 00:53:29.732503 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-04 00:53:29.732533 | 
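The directory tasks in this stretch ("Ensuring config directories exist", the haproxy service config subdir, and the proxysql `users`/`rules` subdirectories) amount to a `mkdir -p` per service and subdirectory. A sketch under /tmp (the real base is /etc/kolla; the haproxy subdir name below is an assumption, while `users` and `rules` come straight from the log items):

```shell
# Per-service config directory layout, as created by the tasks above.
base=/tmp/etc-kolla   # stands in for /etc/kolla so this runs unprivileged
for svc in haproxy proxysql keepalived; do
  mkdir -p "$base/$svc"
done
mkdir -p "$base/haproxy/services.d"                     # assumed subdir name
mkdir -p "$base/proxysql/users" "$base/proxysql/rules"  # items from the log
ls "$base"
```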
orchestrator | Sunday 04 May 2025 00:46:13 +0000 (0:00:02.178) 0:00:17.270 ************ 2025-05-04 00:53:29.732562 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-04 00:53:29.732588 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-04 00:53:29.732614 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-04 00:53:29.732637 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-04 00:53:29.732652 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-04 00:53:29.732666 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-04 00:53:29.732680 | orchestrator | 2025-05-04 00:53:29.732695 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-04 00:53:29.732721 | orchestrator | Sunday 04 May 2025 00:46:16 +0000 (0:00:03.515) 0:00:20.785 ************ 2025-05-04 00:53:29.732736 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.732750 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.732765 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.732778 | orchestrator | 2025-05-04 00:53:29.732793 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-04 00:53:29.732860 | orchestrator | Sunday 04 May 2025 00:46:18 +0000 (0:00:01.582) 0:00:22.367 ************ 2025-05-04 00:53:29.732879 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:53:29.732894 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:53:29.732908 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:53:29.732923 | orchestrator | 2025-05-04 00:53:29.732937 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-04 00:53:29.732951 | orchestrator | Sunday 04 May 2025 00:46:20 +0000 (0:00:01.910) 0:00:24.278 ************ 2025-05-04 00:53:29.732966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.732982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.733008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.733023 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.733047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.733062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 
00:53:29.733077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733145 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.733160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733175 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.733196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733212 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.733226 | orchestrator | 2025-05-04 00:53:29.733240 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-04 00:53:29.733255 | orchestrator | Sunday 04 May 2025 00:46:22 +0000 (0:00:02.731) 0:00:27.009 ************ 2025-05-04 00:53:29.733269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.733388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.733439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.733927 | orchestrator | 2025-05-04 00:53:29.733977 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-04 00:53:29.734004 | orchestrator | Sunday 04 May 2025 00:46:27 +0000 (0:00:04.159) 0:00:31.169 ************ 2025-05-04 00:53:29.734094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.734290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.734305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.734329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.734345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.734360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.734375 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-04 00:53:29.734389 | orchestrator | 2025-05-04 00:53:29.734405 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-04 00:53:29.734430 | orchestrator | Sunday 04 May 2025 00:46:30 +0000 (0:00:03.286) 0:00:34.455 ************ 2025-05-04 00:53:29.734468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-04 00:53:29.734498 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-04 00:53:29.734660 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-04 00:53:29.734682 | orchestrator | 2025-05-04 00:53:29.734729 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-04 00:53:29.734758 | orchestrator | Sunday 04 May 2025 00:46:32 +0000 (0:00:01.952) 0:00:36.407 ************ 2025-05-04 00:53:29.734783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-04 00:53:29.734860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 
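The loop items in this log all repeat one kolla-ansible service-definition shape: a `container_name`, `image`, `volumes` list, and an optional `healthcheck` dict with string-valued `interval`/`retries`/`start_period`/`timeout` and a `test` command list. A minimal Python sketch that summarizes the healthcheck of one such item — the example dict mirrors the logged proxysql entry (trimmed), while the helper function itself is illustrative and not part of kolla-ansible:

```python
# Shape of one loop item as it appears in the Ansible output above.
# Field names are taken verbatim from the log; the volumes list is trimmed.
proxysql = {
    "key": "proxysql",
    "value": {
        "container_name": "proxysql",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/proxysql:2.6.6.20241206",
        "privileged": False,
        "volumes": [
            "/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
            "timeout": "30",
        },
    },
}

def healthcheck_summary(service: dict) -> str:
    """Render the healthcheck of a kolla-style service item as one line.

    Illustrative helper only: keepalived items in the log carry no
    'healthcheck' key, which is why the None branch exists.
    """
    hc = service["value"].get("healthcheck")
    if hc is None:
        return "no healthcheck"
    # A leading CMD-SHELL marker means the rest of the list is the shell command.
    parts = hc["test"][1:] if hc["test"][0] == "CMD-SHELL" else hc["test"]
    return f"{' '.join(parts)} (every {hc['interval']}s, {hc['retries']} retries)"

print(healthcheck_summary(proxysql))
```

The same summary applied to the logged haproxy item would yield its `healthcheck_curl http://…:61313` command, while the keepalived items (no `healthcheck` key) fall into the `no healthcheck` branch.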
2025-05-04 00:53:29.734887 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-04 00:53:29.734912 | orchestrator | 2025-05-04 00:53:29.734938 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-04 00:53:29.734963 | orchestrator | Sunday 04 May 2025 00:46:36 +0000 (0:00:03.967) 0:00:40.375 ************ 2025-05-04 00:53:29.734989 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.735014 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.735078 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.735100 | orchestrator | 2025-05-04 00:53:29.735124 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-04 00:53:29.735150 | orchestrator | Sunday 04 May 2025 00:46:38 +0000 (0:00:02.498) 0:00:42.873 ************ 2025-05-04 00:53:29.735174 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-04 00:53:29.735284 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-04 00:53:29.735315 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-04 00:53:29.735341 | orchestrator | 2025-05-04 00:53:29.735368 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-04 00:53:29.735543 | orchestrator | Sunday 04 May 2025 00:46:41 +0000 (0:00:02.744) 0:00:45.617 ************ 2025-05-04 00:53:29.735576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-04 00:53:29.735602 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-04 00:53:29.735624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-04 00:53:29.735643 | orchestrator | 2025-05-04 00:53:29.735720 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-04 00:53:29.735749 | orchestrator | Sunday 04 May 2025 00:46:43 +0000 (0:00:02.011) 0:00:47.629 ************ 2025-05-04 00:53:29.735869 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-04 00:53:29.735920 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-04 00:53:29.735946 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-04 00:53:29.735972 | orchestrator | 2025-05-04 00:53:29.735996 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-04 00:53:29.736021 | orchestrator | Sunday 04 May 2025 00:46:45 +0000 (0:00:01.923) 0:00:49.552 ************ 2025-05-04 00:53:29.736045 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-04 00:53:29.736070 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-04 00:53:29.736097 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-04 00:53:29.736184 | orchestrator | 2025-05-04 00:53:29.736204 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-04 00:53:29.736219 | orchestrator | Sunday 04 May 2025 00:46:48 +0000 (0:00:02.577) 0:00:52.129 ************ 2025-05-04 00:53:29.736260 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.736276 | orchestrator | 2025-05-04 00:53:29.736294 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-04 
00:53:29.736458 | orchestrator | Sunday 04 May 2025 00:46:49 +0000 (0:00:01.006) 0:00:53.136 ************ 2025-05-04 00:53:29.736491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.736556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.736674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-04 00:53:29.736949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.737004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.737021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-04 00:53:29.737037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.737097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.737134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-04 00:53:29.737228 | orchestrator | 2025-05-04 00:53:29.737243 | orchestrator | TASK [service-cert-copy : 
loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-04 00:53:29.737286 | orchestrator | Sunday 04 May 2025 00:46:52 +0000 (0:00:03.515) 0:00:56.652 ************ 2025-05-04 00:53:29.737310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.737338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.737367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.737395 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.737423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.737461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.737546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.737566 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.737648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.737673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.737696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.737718 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.737731 | orchestrator | 2025-05-04 00:53:29.737748 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-04 00:53:29.737769 | orchestrator | Sunday 04 May 2025 00:46:53 +0000 (0:00:00.698) 0:00:57.351 ************ 2025-05-04 00:53:29.737787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.737843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.737885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-04 00:53:29.737908 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.737926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-04 00:53:29.737939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-04 00:53:29.737953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-04 00:53:29.737965 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.737978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-04 00:53:29.737998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-04 00:53:29.738180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-04 00:53:29.738204 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.738272 | orchestrator |
2025-05-04 00:53:29.738299 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-04 00:53:29.738320 | orchestrator | Sunday 04 May 2025 00:46:54 +0000 (0:00:01.417) 0:00:58.768 ************
2025-05-04 00:53:29.738352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-04 00:53:29.738376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-04 00:53:29.738397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-04 00:53:29.738419 | orchestrator |
2025-05-04 00:53:29.738443 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-04 00:53:29.738463 | orchestrator | Sunday 04 May 2025 00:46:56 +0000 (0:00:01.940) 0:01:00.709 ************
2025-05-04 00:53:29.738485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-04 00:53:29.738508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-04 00:53:29.738532 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-04 00:53:29.738555 | orchestrator |
2025-05-04 00:53:29.738577 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-04 00:53:29.738599 | orchestrator | Sunday 04 May 2025 00:46:58 +0000 (0:00:02.258) 0:01:02.968 ************
2025-05-04 00:53:29.738621 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-04 00:53:29.738653 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-04 00:53:29.738676 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-04 00:53:29.738694 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-04 00:53:29.738786 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.738921 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-04 00:53:29.739038 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.739062 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-04 00:53:29.739084 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.739121 | orchestrator |
2025-05-04 00:53:29.739144 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-04 00:53:29.739162 | orchestrator | Sunday 04 May 2025 00:47:00 +0000 (0:00:02.066) 0:01:05.034 ************
2025-05-04 00:53:29.739176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-04 00:53:29.739190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-04 00:53:29.739204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-04 00:53:29.739239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-04 00:53:29.742327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-04 00:53:29.742440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-04 00:53:29.742472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-04 00:53:29.742483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-04 00:53:29.742496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-04 00:53:29.742519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-04 00:53:29.742530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-04 00:53:29.742545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15', '__omit_place_holder__397b2045980cc3abe09f757bf6e811b4f3204a15'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-04 00:53:29.742562 | orchestrator |
2025-05-04 00:53:29.742573 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-04 00:53:29.742584 | orchestrator | Sunday 04 May 2025 00:47:04 +0000 (0:00:03.773) 0:01:08.808 ************
2025-05-04 00:53:29.742595 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.742621 | orchestrator |
2025-05-04 00:53:29.742632 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-04 00:53:29.742643 | orchestrator | Sunday 04 May 2025 00:47:05 +0000 (0:00:00.678) 0:01:09.486 ************
2025-05-04 00:53:29.742654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.742665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.742676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.742764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.742775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.742859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.742873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742901 | orchestrator |
2025-05-04 00:53:29.742912 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-04 00:53:29.742923 | orchestrator | Sunday 04 May 2025 00:47:08 +0000 (0:00:03.346) 0:01:12.832 ************
2025-05-04 00:53:29.742934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.742944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.742955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.742982 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.743110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.743132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.743148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743170 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.743180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-04 00:53:29.743199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.743220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743247 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.743258 | orchestrator |
2025-05-04 00:53:29.743268 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-04 00:53:29.743279 | orchestrator | Sunday 04 May 2025 00:47:09 +0000 (0:00:01.229) 0:01:14.062 ************
2025-05-04 00:53:29.743289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743312 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.743323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743343 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.743354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-04 00:53:29.743374 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.743389 | orchestrator |
2025-05-04 00:53:29.743400 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-04 00:53:29.743410 | orchestrator | Sunday 04 May 2025 00:47:11 +0000 (0:00:01.346) 0:01:15.409 ************
2025-05-04 00:53:29.743421 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.743431 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.743441 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.743451 | orchestrator |
2025-05-04 00:53:29.743462 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-04 00:53:29.743472 | orchestrator | Sunday 04 May 2025 00:47:12 +0000 (0:00:01.302) 0:01:16.712 ************
2025-05-04 00:53:29.743483 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.743493 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.743503 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.743513 | orchestrator |
2025-05-04 00:53:29.743524 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-04 00:53:29.743534 | orchestrator | Sunday 04 May 2025 00:47:15 +0000 (0:00:02.461) 0:01:19.174 ************
2025-05-04 00:53:29.743552 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.743562 | orchestrator |
2025-05-04 00:53:29.743608 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-04 00:53:29.743621 | orchestrator | Sunday 04 May 2025 00:47:15 +0000 (0:00:00.741) 0:01:19.915 ************
2025-05-04 00:53:29.743668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.743681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.743705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.743780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.743846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743867 | orchestrator | 2025-05-04 00:53:29.743878 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-04 00:53:29.743889 | orchestrator | Sunday 04 May 2025 00:47:21 +0000 (0:00:05.163) 0:01:25.079 ************ 2025-05-04 00:53:29.743912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.743941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.743964 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.743975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.743993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.744005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.744021 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.744038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.744049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.744060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.744071 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.744081 | orchestrator | 2025-05-04 00:53:29.744092 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-04 00:53:29.744102 | orchestrator | Sunday 04 May 2025 00:47:21 +0000 (0:00:00.984) 0:01:26.064 ************ 2025-05-04 00:53:29.744113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744147 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.744162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744266 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.744280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-04 00:53:29.744301 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.744311 | orchestrator | 2025-05-04 00:53:29.744322 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-04 00:53:29.744332 | orchestrator | Sunday 04 May 2025 00:47:23 +0000 (0:00:01.237) 0:01:27.301 ************ 2025-05-04 00:53:29.744343 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.744353 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.744363 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.744373 | orchestrator | 2025-05-04 00:53:29.744384 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-04 00:53:29.744394 | orchestrator | Sunday 04 May 2025 00:47:24 +0000 (0:00:01.418) 0:01:28.720 ************ 2025-05-04 00:53:29.744404 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.744414 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.744424 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.744453 | orchestrator | 2025-05-04 00:53:29.744464 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-04 00:53:29.744475 | orchestrator | Sunday 04 May 2025 00:47:26 +0000 (0:00:01.925) 0:01:30.645 ************ 2025-05-04 00:53:29.744487 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.744504 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.744520 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.744532 | orchestrator | 2025-05-04 00:53:29.744549 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-04 00:53:29.744560 | orchestrator | Sunday 04 May 2025 00:47:26 +0000 (0:00:00.248) 0:01:30.894 
************ 2025-05-04 00:53:29.744570 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.744580 | orchestrator | 2025-05-04 00:53:29.744590 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-04 00:53:29.744601 | orchestrator | Sunday 04 May 2025 00:47:27 +0000 (0:00:00.702) 0:01:31.596 ************ 2025-05-04 00:53:29.744612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-04 00:53:29.744634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-04 00:53:29.744651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-04 00:53:29.744663 | orchestrator | 2025-05-04 00:53:29.744676 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-04 00:53:29.744693 | orchestrator | Sunday 04 May 2025 00:47:31 +0000 (0:00:03.507) 0:01:35.104 ************ 2025-05-04 00:53:29.744753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-04 00:53:29.744855 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.744901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-04 00:53:29.744920 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.744931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-04 00:53:29.744952 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.744962 | orchestrator | 2025-05-04 00:53:29.744973 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-04 00:53:29.744983 | orchestrator | Sunday 04 May 2025 00:47:32 +0000 (0:00:01.296) 0:01:36.400 ************ 2025-05-04 00:53:29.744995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745019 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.745030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745052 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.745063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-04 00:53:29.745091 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.745101 | orchestrator | 2025-05-04 00:53:29.745112 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-04 00:53:29.745122 | orchestrator | Sunday 04 May 2025 00:47:34 +0000 (0:00:01.714) 0:01:38.115 ************ 2025-05-04 00:53:29.745135 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.745154 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.745171 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.745181 | orchestrator | 2025-05-04 00:53:29.745192 | orchestrator | TASK [proxysql-config : Copying 
over ceph-rgw ProxySQL rules config] *********** 2025-05-04 00:53:29.745202 | orchestrator | Sunday 04 May 2025 00:47:34 +0000 (0:00:00.589) 0:01:38.705 ************ 2025-05-04 00:53:29.745221 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.745238 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.745256 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.745269 | orchestrator | 2025-05-04 00:53:29.745279 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-04 00:53:29.745289 | orchestrator | Sunday 04 May 2025 00:47:36 +0000 (0:00:01.493) 0:01:40.199 ************ 2025-05-04 00:53:29.745299 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.745310 | orchestrator | 2025-05-04 00:53:29.745320 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-04 00:53:29.745330 | orchestrator | Sunday 04 May 2025 00:47:36 +0000 (0:00:00.821) 0:01:41.020 ************ 2025-05-04 00:53:29.745340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
2025-05-04 00:53:29.745352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.745413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.745436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-05-04 00:53:29.745563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745584 | orchestrator | 2025-05-04 00:53:29.745607 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-04 00:53:29.745626 | orchestrator | Sunday 04 May 2025 00:47:41 +0000 (0:00:04.591) 0:01:45.612 ************ 2025-05-04 00:53:29.745639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 
00:53:29.745652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745706 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.745719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.745732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745755 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745795 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.745854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.745871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.745930 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.745943 | orchestrator | 2025-05-04 00:53:29.745956 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-04 00:53:29.745969 | orchestrator | Sunday 04 May 2025 00:47:42 +0000 (0:00:01.105) 0:01:46.718 ************ 2025-05-04 00:53:29.745982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746054 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.746070 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746096 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.746109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-04 00:53:29.746135 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.746147 | orchestrator | 2025-05-04 00:53:29.746160 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-04 00:53:29.746173 | orchestrator | Sunday 04 May 2025 00:47:43 +0000 (0:00:01.269) 0:01:47.988 ************ 2025-05-04 00:53:29.746185 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.746198 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.746210 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.746223 | orchestrator | 2025-05-04 00:53:29.746235 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-04 00:53:29.746248 | orchestrator | Sunday 04 May 2025 00:47:45 +0000 (0:00:01.490) 0:01:49.478 ************ 2025-05-04 00:53:29.746260 | orchestrator | changed: [testbed-node-0] 2025-05-04 
00:53:29.746273 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.746285 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.746298 | orchestrator | 2025-05-04 00:53:29.746310 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-04 00:53:29.746323 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:02.252) 0:01:51.730 ************ 2025-05-04 00:53:29.746335 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.746348 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.746360 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.746373 | orchestrator | 2025-05-04 00:53:29.746386 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-04 00:53:29.746399 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.339) 0:01:52.070 ************ 2025-05-04 00:53:29.746412 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.746467 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.746483 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.746500 | orchestrator | 2025-05-04 00:53:29.746513 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-04 00:53:29.746526 | orchestrator | Sunday 04 May 2025 00:47:48 +0000 (0:00:00.612) 0:01:52.682 ************ 2025-05-04 00:53:29.746539 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.746560 | orchestrator | 2025-05-04 00:53:29.746573 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-04 00:53:29.746585 | orchestrator | Sunday 04 May 2025 00:47:49 +0000 (0:00:01.118) 0:01:53.801 ************ 2025-05-04 00:53:29.746599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 00:53:29.746621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 00:53:29.746635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
2025-05-04 00:53:29.746707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 00:53:29.746749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 00:53:29.746764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 00:53:29.746849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.746868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-04 00:53:29.746905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.746981 | orchestrator |
2025-05-04 00:53:29.746999 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-05-04 00:53:29.747013 | orchestrator | Sunday 04 May 2025 00:47:55 +0000 (0:00:05.833) 0:01:59.635 ************
2025-05-04 00:53:29.747035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-04 00:53:29.747049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-04 00:53:29.747069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747150 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.747163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-04 00:53:29.747183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-04 00:53:29.747196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747277 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.747290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-04 00:53:29.747311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-04 00:53:29.747338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.747421 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.747437 | orchestrator |
2025-05-04 00:53:29.747451 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-05-04 00:53:29.747463 | orchestrator | Sunday 04 May 2025 00:47:56 +0000 (0:00:00.877) 0:02:00.513 ************
2025-05-04 00:53:29.747476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747502 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747516 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.747529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747541 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.747554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-04 00:53:29.747579 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.747592 | orchestrator |
2025-05-04 00:53:29.747604 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-05-04 00:53:29.747617 | orchestrator | Sunday 04 May 2025 00:47:57 +0000 (0:00:01.221) 0:02:01.735 ************
2025-05-04 00:53:29.747630 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.747642 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.747655 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.747667 | orchestrator |
2025-05-04 00:53:29.747680 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-05-04 00:53:29.747692 | orchestrator | Sunday 04 May 2025 00:47:58 +0000 (0:00:01.208) 0:02:02.943 ************
2025-05-04 00:53:29.747705 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.747717 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.747730 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.747778 | orchestrator |
2025-05-04 00:53:29.747794 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-05-04 00:53:29.747827 | orchestrator | Sunday 04 May 2025 00:48:01 +0000 (0:00:02.515) 0:02:05.459 ************
2025-05-04 00:53:29.747842 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.747855 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.747867 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.747880 | orchestrator |
2025-05-04 00:53:29.747893 | orchestrator | TASK [include_role : glance] ***************************************************
2025-05-04 00:53:29.747925 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.731) 0:02:06.190 ************
2025-05-04 00:53:29.747946 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.747959 | orchestrator |
2025-05-04 00:53:29.747972 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-05-04 00:53:29.747985 | orchestrator | Sunday 04 May 2025 00:48:03 +0000 (0:00:01.260) 0:02:07.451 ************
2025-05-04 00:53:29.748010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 00:53:29.748025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-04 00:53:29.748056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 00:53:29.748078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-04 00:53:29.748109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 00:53:29.748130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-04 00:53:29.748156 | orchestrator |
2025-05-04 00:53:29.748169 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-05-04 00:53:29.748182 | orchestrator | Sunday 04 May 2025 00:48:09 +0000 (0:00:05.722) 0:02:13.173 ************
2025-05-04 00:53:29.748203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 00:53:29.748223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-04 00:53:29.748245 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.748266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 00:53:29.748294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.748308 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.748322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5', '']}}}})  2025-05-04 00:53:29.748356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.748380 | orchestrator | 
skipping: [testbed-node-1] 2025-05-04 00:53:29.748393 | orchestrator | 2025-05-04 00:53:29.748406 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-04 00:53:29.748423 | orchestrator | Sunday 04 May 2025 00:48:12 +0000 (0:00:03.142) 0:02:16.315 ************ 2025-05-04 00:53:29.748476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-04 00:53:29.748491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-04 00:53:29.748504 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.748517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}})  2025-05-04 00:53:29.748552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-04 00:53:29.748567 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.748580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-04 00:53:29.748594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-04 00:53:29.748607 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.748620 | orchestrator | 2025-05-04 00:53:29.748632 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] 
************* 2025-05-04 00:53:29.748645 | orchestrator | Sunday 04 May 2025 00:48:16 +0000 (0:00:04.407) 0:02:20.723 ************ 2025-05-04 00:53:29.748658 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.748670 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.748683 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.748696 | orchestrator | 2025-05-04 00:53:29.748708 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-04 00:53:29.748721 | orchestrator | Sunday 04 May 2025 00:48:17 +0000 (0:00:01.250) 0:02:21.974 ************ 2025-05-04 00:53:29.748734 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.748747 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.748840 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.748864 | orchestrator | 2025-05-04 00:53:29.748885 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-04 00:53:29.748904 | orchestrator | Sunday 04 May 2025 00:48:19 +0000 (0:00:01.879) 0:02:23.854 ************ 2025-05-04 00:53:29.748922 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.748941 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.748955 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.748968 | orchestrator | 2025-05-04 00:53:29.748981 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-04 00:53:29.748996 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.470) 0:02:24.325 ************ 2025-05-04 00:53:29.749011 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.749025 | orchestrator | 2025-05-04 00:53:29.749039 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-04 00:53:29.749054 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:01.030) 
0:02:25.355 ************ 2025-05-04 00:53:29.749069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 00:53:29.749094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 00:53:29.749119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 00:53:29.749134 | orchestrator | 2025-05-04 00:53:29.749149 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-04 00:53:29.749163 | orchestrator | Sunday 04 May 2025 00:48:26 +0000 (0:00:05.379) 0:02:30.735 ************ 2025-05-04 00:53:29.749178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 00:53:29.749193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 00:53:29.749208 | orchestrator | skipping: [testbed-node-0] 
2025-05-04 00:53:29.749222 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.749237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 00:53:29.749258 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.749273 | orchestrator | 2025-05-04 00:53:29.749287 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-04 00:53:29.749302 | orchestrator | Sunday 04 May 2025 00:48:27 +0000 (0:00:00.437) 0:02:31.173 ************ 2025-05-04 00:53:29.749316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749353 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.749368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749397 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.749411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-04 00:53:29.749447 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.749461 | orchestrator | 2025-05-04 00:53:29.749476 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-04 00:53:29.749490 | orchestrator | Sunday 04 May 2025 00:48:28 +0000 (0:00:01.129) 0:02:32.302 ************ 2025-05-04 00:53:29.749504 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.749518 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.749533 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.749547 | orchestrator | 2025-05-04 00:53:29.749562 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-04 00:53:29.749576 | orchestrator | Sunday 04 May 2025 00:48:29 +0000 (0:00:01.239) 0:02:33.541 ************ 2025-05-04 00:53:29.749589 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.749603 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.749617 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.749632 | orchestrator | 2025-05-04 00:53:29.749646 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-04 00:53:29.749660 | orchestrator | Sunday 04 
May 2025 00:48:31 +0000 (0:00:02.482) 0:02:36.024 ************ 2025-05-04 00:53:29.749674 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.749688 | orchestrator | 2025-05-04 00:53:29.749703 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-04 00:53:29.749717 | orchestrator | Sunday 04 May 2025 00:48:33 +0000 (0:00:01.498) 0:02:37.523 ************ 2025-05-04 00:53:29.749731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.749912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.749955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.749970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.749985 | orchestrator | 2025-05-04 00:53:29.750006 | orchestrator | TASK [haproxy-config : Add 
configuration for heat when using single external frontend] *** 2025-05-04 00:53:29.750064 | orchestrator | Sunday 04 May 2025 00:48:41 +0000 (0:00:07.731) 0:02:45.255 ************ 2025-05-04 00:53:29.750083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 
'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.750146 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.750161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.750224 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.750239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 
'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.750276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.750289 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.750302 | orchestrator | 2025-05-04 00:53:29.750315 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 
2025-05-04 00:53:29.750328 | orchestrator | Sunday 04 May 2025 00:48:42 +0000 (0:00:01.070) 0:02:46.325 ************ 2025-05-04 00:53:29.750341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750410 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.750428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750473 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750486 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.750498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-04 00:53:29.750549 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.750561 | orchestrator | 2025-05-04 00:53:29.750574 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-04 00:53:29.750587 | orchestrator | Sunday 04 May 2025 00:48:43 +0000 (0:00:01.269) 0:02:47.594 ************ 2025-05-04 00:53:29.750599 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.750612 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.750624 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.750642 | orchestrator | 2025-05-04 00:53:29.750655 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules 
config] *************** 2025-05-04 00:53:29.750673 | orchestrator | Sunday 04 May 2025 00:48:44 +0000 (0:00:01.377) 0:02:48.972 ************ 2025-05-04 00:53:29.750690 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.750702 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.750715 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.750728 | orchestrator | 2025-05-04 00:53:29.750746 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-04 00:53:29.750759 | orchestrator | Sunday 04 May 2025 00:48:47 +0000 (0:00:02.235) 0:02:51.208 ************ 2025-05-04 00:53:29.750771 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.750784 | orchestrator | 2025-05-04 00:53:29.750797 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-04 00:53:29.750833 | orchestrator | Sunday 04 May 2025 00:48:48 +0000 (0:00:01.092) 0:02:52.301 ************ 2025-05-04 00:53:29.750866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:53:29.750889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:53:29.750919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:53:29.750948 | orchestrator | 2025-05-04 00:53:29.750961 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-04 00:53:29.750974 | orchestrator | Sunday 04 May 2025 00:48:52 +0000 (0:00:04.579) 0:02:56.880 ************ 2025-05-04 
00:53:29.750988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:53:29.751012 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.751042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:53:29.751057 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.751071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:53:29.751099 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.751113 | orchestrator | 2025-05-04 00:53:29.751131 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-04 00:53:29.751144 | orchestrator | Sunday 04 May 2025 00:48:53 +0000 (0:00:00.879) 0:02:57.760 ************ 2025-05-04 00:53:29.751158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-04 00:53:29.751177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-04 00:53:29.751206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-04 00:53:29.751232 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.751251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-04 00:53:29.751264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2025-05-04 00:53:29.751290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-04 00:53:29.751322 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.751335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-04 00:53:29.751352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-04 00:53:29.751383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-04 00:53:29.751396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-04 00:53:29.751409 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.751421 | orchestrator | 2025-05-04 00:53:29.751434 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-04 00:53:29.751447 | orchestrator | Sunday 04 May 2025 00:48:55 +0000 (0:00:01.569) 0:02:59.329 ************ 2025-05-04 00:53:29.751460 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.751473 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.751485 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.751498 | orchestrator | 2025-05-04 00:53:29.751511 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-04 00:53:29.751523 | orchestrator | Sunday 04 May 2025 00:48:56 +0000 (0:00:01.416) 0:03:00.745 ************ 2025-05-04 00:53:29.751536 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.751548 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.751561 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.751574 | orchestrator | 2025-05-04 00:53:29.751586 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-04 00:53:29.751599 | orchestrator | Sunday 04 May 2025 00:48:59 +0000 (0:00:02.349) 0:03:03.095 ************ 2025-05-04 00:53:29.751611 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.751624 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.751636 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.751649 | orchestrator | 2025-05-04 00:53:29.751662 | orchestrator | TASK [include_role : 
ironic] *************************************************** 2025-05-04 00:53:29.751675 | orchestrator | Sunday 04 May 2025 00:48:59 +0000 (0:00:00.522) 0:03:03.618 ************ 2025-05-04 00:53:29.751687 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.751745 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.751759 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.751771 | orchestrator | 2025-05-04 00:53:29.751784 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-04 00:53:29.751804 | orchestrator | Sunday 04 May 2025 00:48:59 +0000 (0:00:00.310) 0:03:03.928 ************ 2025-05-04 00:53:29.751840 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.751853 | orchestrator | 2025-05-04 00:53:29.751866 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-04 00:53:29.751879 | orchestrator | Sunday 04 May 2025 00:49:01 +0000 (0:00:01.323) 0:03:05.251 ************ 2025-05-04 00:53:29.751892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:53:29.751907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:53:29.752039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:53:29.752095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752122 | orchestrator | 2025-05-04 00:53:29.752134 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-04 00:53:29.752147 | orchestrator | Sunday 04 May 2025 00:49:05 +0000 (0:00:04.368) 0:03:09.620 ************ 2025-05-04 00:53:29.752160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:53:29.752180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752207 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.752226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:53:29.752240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752273 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.752287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:53:29.752301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:53:29.752314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:53:29.752327 | orchestrator | skipping: [testbed-node-2] 2025-05-04 
00:53:29.752339 | orchestrator | 2025-05-04 00:53:29.752352 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-04 00:53:29.752365 | orchestrator | Sunday 04 May 2025 00:49:06 +0000 (0:00:01.021) 0:03:10.641 ************ 2025-05-04 00:53:29.752383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752446 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.752459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752492 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.752505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-04 00:53:29.752530 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.752543 | orchestrator | 2025-05-04 00:53:29.752556 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-04 00:53:29.752568 | orchestrator | Sunday 04 May 2025 00:49:07 +0000 (0:00:01.014) 0:03:11.656 ************ 2025-05-04 00:53:29.752580 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.752593 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.752606 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.752618 | orchestrator | 2025-05-04 00:53:29.752631 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-04 00:53:29.752644 | orchestrator | Sunday 04 May 2025 00:49:09 +0000 (0:00:01.439) 0:03:13.096 ************ 2025-05-04 00:53:29.752656 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.752669 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.752681 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.752694 | orchestrator | 2025-05-04 00:53:29.752707 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-04 00:53:29.752719 | orchestrator | Sunday 04 May 2025 00:49:11 +0000 (0:00:02.315) 0:03:15.412 ************ 2025-05-04 00:53:29.752732 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.752744 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.752757 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.752770 | orchestrator | 2025-05-04 00:53:29.752783 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-04 00:53:29.752796 | 
orchestrator | Sunday 04 May 2025 00:49:11 +0000 (0:00:00.298) 0:03:15.710 ************ 2025-05-04 00:53:29.752942 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.752989 | orchestrator | 2025-05-04 00:53:29.753002 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-04 00:53:29.753015 | orchestrator | Sunday 04 May 2025 00:49:12 +0000 (0:00:01.321) 0:03:17.032 ************ 2025-05-04 00:53:29.753029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 00:53:29.753056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 00:53:29.753097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753111 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 00:53:29.753124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753143 | orchestrator | 2025-05-04 00:53:29.753156 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-04 00:53:29.753169 | orchestrator | Sunday 04 May 2025 00:49:16 +0000 (0:00:03.977) 0:03:21.010 ************ 
2025-05-04 00:53:29.753188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 00:53:29.753202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753215 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.753229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 00:53:29.753242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753255 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.753274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 00:53:29.753294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753307 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.753320 | orchestrator | 2025-05-04 00:53:29.753332 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-04 00:53:29.753345 | orchestrator | Sunday 04 May 2025 00:49:17 +0000 (0:00:00.911) 0:03:21.921 ************ 2025-05-04 00:53:29.753359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753392 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.753402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753423 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.753434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-04 00:53:29.753454 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.753464 | orchestrator | 2025-05-04 00:53:29.753475 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-04 00:53:29.753485 | orchestrator | Sunday 04 May 2025 00:49:18 +0000 (0:00:01.032) 0:03:22.954 ************ 2025-05-04 00:53:29.753495 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.753506 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.753516 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.753526 | orchestrator | 2025-05-04 00:53:29.753537 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-04 
00:53:29.753547 | orchestrator | Sunday 04 May 2025 00:49:20 +0000 (0:00:01.292) 0:03:24.247 ************ 2025-05-04 00:53:29.753557 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.753573 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.753583 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.753594 | orchestrator | 2025-05-04 00:53:29.753604 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-04 00:53:29.753614 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:02.023) 0:03:26.271 ************ 2025-05-04 00:53:29.753625 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.753635 | orchestrator | 2025-05-04 00:53:29.753645 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-04 00:53:29.753655 | orchestrator | Sunday 04 May 2025 00:49:23 +0000 (0:00:01.476) 0:03:27.747 ************ 2025-05-04 00:53:29.753671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-04 00:53:29.753682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 
'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-04 00:53:29.753733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-04 00:53:29.753783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.753859 | orchestrator | 2025-05-04 00:53:29.753870 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-04 00:53:29.753881 | orchestrator | Sunday 04 May 2025 00:49:29 +0000 (0:00:05.394) 0:03:33.141 ************ 
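The per-item "changed" vs "skipping" pattern in the haproxy-config task above follows from a simple selection rule: of the four manila services iterated, only manila-api carries a 'haproxy' mapping in its definition, so only it gets frontend/backend config rendered, while manila-scheduler, manila-share, and manila-data are skipped per-item. A minimal Python sketch of that filter (illustrative names only; not the actual kolla-ansible role source, which implements this as a Jinja loop condition):

```python
# Sketch of the selection behind "changed" vs "skipping" per item:
# a service entry is rendered into HAProxy config only when it is
# enabled AND defines a 'haproxy' mapping. Service dicts below are
# trimmed-down stand-ins for the full definitions in the log.

manila_services = {
    "manila-api": {
        "enabled": True,
        "haproxy": {
            "manila_api": {"enabled": "yes", "mode": "http",
                           "external": False, "port": "8786"},
            "manila_api_external": {"enabled": "yes", "mode": "http",
                                    "external": True, "port": "8786"},
        },
    },
    "manila-scheduler": {"enabled": True},  # no 'haproxy' key -> skipped
    "manila-share": {"enabled": True},      # no 'haproxy' key -> skipped
    "manila-data": {"enabled": True},       # no 'haproxy' key -> skipped
}

def services_needing_haproxy(services):
    """Yield names of services that would produce HAProxy config."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            yield name

print(list(services_needing_haproxy(manila_services)))  # -> ['manila-api']
```

The same rule explains the mariadb tasks later in the log: mariadb and mariadb-clustercheck both appear as loop items, but only entries with a 'haproxy' key contribute listener stanzas.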
2025-05-04 00:53:29.753898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-04 00:53:29.754105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754155 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.754167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-04 00:53:29.754179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754281 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.754291 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-04 00:53:29.754302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.754342 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.754353 | orchestrator | 2025-05-04 00:53:29.754363 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-04 00:53:29.754374 | orchestrator | Sunday 04 May 2025 00:49:29 +0000 (0:00:00.778) 0:03:33.920 ************ 2025-05-04 00:53:29.754384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754461 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.754481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754503 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.754515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-04 00:53:29.754537 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.754547 | orchestrator | 2025-05-04 00:53:29.754558 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-04 00:53:29.754569 | orchestrator | Sunday 04 May 2025 00:49:30 +0000 (0:00:01.100) 0:03:35.020 ************ 2025-05-04 00:53:29.754601 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.754618 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.754628 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.754638 | orchestrator | 2025-05-04 00:53:29.754649 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-04 00:53:29.754659 | orchestrator | Sunday 04 May 2025 00:49:32 +0000 (0:00:01.273) 0:03:36.294 ************ 2025-05-04 00:53:29.754669 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.754679 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.754689 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.754700 | orchestrator | 2025-05-04 00:53:29.754710 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-04 00:53:29.754720 | orchestrator | Sunday 04 May 2025 
00:49:34 +0000 (0:00:02.177) 0:03:38.471 ************ 2025-05-04 00:53:29.754730 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.754741 | orchestrator | 2025-05-04 00:53:29.754751 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-04 00:53:29.754761 | orchestrator | Sunday 04 May 2025 00:49:35 +0000 (0:00:01.603) 0:03:40.075 ************ 2025-05-04 00:53:29.754772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:53:29.754782 | orchestrator | 2025-05-04 00:53:29.754792 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-04 00:53:29.754803 | orchestrator | Sunday 04 May 2025 00:49:39 +0000 (0:00:03.587) 0:03:43.663 ************ 2025-05-04 00:53:29.754844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.754923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-04 00:53:29.754947 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.754959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.754971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 
2025-05-04 00:53:29.754982 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.755073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.755112 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-04 00:53:29.755129 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.755144 | orchestrator | 2025-05-04 00:53:29.755159 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-04 00:53:29.755176 | orchestrator | Sunday 04 May 2025 00:49:43 +0000 (0:00:03.590) 0:03:47.254 ************ 2025-05-04 00:53:29.755196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.755331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.755385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-04 00:53:29.755398 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.755410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-04 00:53:29.755422 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.755498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-04 00:53:29.755523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-04 00:53:29.755535 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.755547 | orchestrator | 2025-05-04 00:53:29.755558 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-04 00:53:29.755570 | orchestrator | Sunday 04 May 2025 00:49:46 +0000 (0:00:03.498) 0:03:50.752 ************ 2025-05-04 00:53:29.755582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755606 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.755618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755643 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.755746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-04 00:53:29.755789 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.755803 | orchestrator | 2025-05-04 00:53:29.755841 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-04 00:53:29.755853 | orchestrator | Sunday 04 May 2025 00:49:49 +0000 (0:00:03.297) 0:03:54.049 ************ 2025-05-04 00:53:29.755865 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.755877 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.755888 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.755899 | orchestrator | 2025-05-04 00:53:29.755911 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-04 00:53:29.755928 | orchestrator | Sunday 04 May 2025 00:49:52 +0000 (0:00:02.286) 0:03:56.336 ************ 2025-05-04 00:53:29.755939 | orchestrator | skipping: [testbed-node-0] 2025-05-04 
00:53:29.755951 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.755963 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.755974 | orchestrator | 2025-05-04 00:53:29.755986 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-04 00:53:29.755997 | orchestrator | Sunday 04 May 2025 00:49:54 +0000 (0:00:02.239) 0:03:58.575 ************ 2025-05-04 00:53:29.756009 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756020 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756046 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756059 | orchestrator | 2025-05-04 00:53:29.756070 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-04 00:53:29.756082 | orchestrator | Sunday 04 May 2025 00:49:54 +0000 (0:00:00.317) 0:03:58.892 ************ 2025-05-04 00:53:29.756093 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.756105 | orchestrator | 2025-05-04 00:53:29.756116 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-04 00:53:29.756128 | orchestrator | Sunday 04 May 2025 00:49:56 +0000 (0:00:01.553) 0:04:00.446 ************ 2025-05-04 00:53:29.756140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-04 00:53:29.756153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-04 00:53:29.756243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-04 00:53:29.756260 | orchestrator | 2025-05-04 00:53:29.756272 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-04 00:53:29.756284 | orchestrator | Sunday 04 May 2025 00:49:58 +0000 (0:00:01.737) 
0:04:02.184 ************ 2025-05-04 00:53:29.756296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-04 00:53:29.756319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-04 00:53:29.756332 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756344 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-04 00:53:29.756374 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756386 | orchestrator | 2025-05-04 00:53:29.756397 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-04 00:53:29.756409 | orchestrator | Sunday 04 May 2025 00:49:58 +0000 (0:00:00.600) 0:04:02.784 ************ 2025-05-04 00:53:29.756420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-04 00:53:29.756433 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-04 00:53:29.756456 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}})  2025-05-04 00:53:29.756480 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756492 | orchestrator | 2025-05-04 00:53:29.756562 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-04 00:53:29.756579 | orchestrator | Sunday 04 May 2025 00:49:59 +0000 (0:00:00.825) 0:04:03.610 ************ 2025-05-04 00:53:29.756590 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756602 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756613 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756625 | orchestrator | 2025-05-04 00:53:29.756636 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-04 00:53:29.756648 | orchestrator | Sunday 04 May 2025 00:50:00 +0000 (0:00:00.736) 0:04:04.346 ************ 2025-05-04 00:53:29.756659 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756670 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756681 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756693 | orchestrator | 2025-05-04 00:53:29.756704 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-04 00:53:29.756716 | orchestrator | Sunday 04 May 2025 00:50:02 +0000 (0:00:01.984) 0:04:06.330 ************ 2025-05-04 00:53:29.756727 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.756739 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.756750 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.756762 | orchestrator | 2025-05-04 00:53:29.756773 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-04 00:53:29.756785 | orchestrator | Sunday 04 May 2025 00:50:02 +0000 (0:00:00.321) 0:04:06.652 ************ 2025-05-04 00:53:29.756797 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-04 00:53:29.756828 | orchestrator | 2025-05-04 00:53:29.756840 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-04 00:53:29.756852 | orchestrator | Sunday 04 May 2025 00:50:04 +0000 (0:00:01.572) 0:04:08.224 ************ 2025-05-04 00:53:29.756879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 00:53:29.756900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.756913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.756989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.757031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 00:53:29.757050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.757144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.757183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.757227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.757314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.757327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.757368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.757380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.757477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.757515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.757550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.757563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.757633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-04 00:53:29.757671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.757706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.757793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 00:53:29.757904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.757959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.758067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 
00:53:29.758114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.758139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.758152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.758177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.758279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.758300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.758336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.758349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758361 | orchestrator | 2025-05-04 00:53:29.758373 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-04 00:53:29.758386 | orchestrator | Sunday 04 May 2025 00:50:09 +0000 (0:00:05.525) 0:04:13.750 ************ 2025-05-04 00:53:29.758469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 00:53:29.758506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.758649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.758692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.758717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 00:53:29.758731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.758887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.758928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.758942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.759049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.759071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759086 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.759128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.759144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.759159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.759249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759336 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.759356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.759378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.759544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 00:53:29.759568 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.759598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-05-04 00:53:29.759626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.759733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.759778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 00:53:29.759792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759825 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.759859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.759883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.759990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 00:53:29.760041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.760064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.760086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.760109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.760139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
00:53:29.760154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.760262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 00:53:29.760283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 00:53:29.760317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.760331 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.760344 | orchestrator | 2025-05-04 00:53:29.760358 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-04 00:53:29.760376 | orchestrator | Sunday 04 May 2025 00:50:11 +0000 (0:00:01.694) 0:04:15.444 ************ 2025-05-04 00:53:29.760390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-04 00:53:29.760416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-04 
00:53:29.760440 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.760470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-04 00:53:29.760492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-04 00:53:29.760514 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.760537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-04 00:53:29.760559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-04 00:53:29.760583 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.760606 | orchestrator | 2025-05-04 00:53:29.760628 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-04 00:53:29.760652 | orchestrator | Sunday 04 May 2025 00:50:13 +0000 (0:00:01.953) 0:04:17.398 ************ 2025-05-04 00:53:29.760669 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.760682 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.760742 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.760765 | orchestrator | 2025-05-04 00:53:29.760778 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-04 00:53:29.760791 | orchestrator | Sunday 04 May 2025 00:50:14 +0000 (0:00:01.533) 0:04:18.932 ************ 2025-05-04 00:53:29.760804 | orchestrator | changed: [testbed-node-0] 
2025-05-04 00:53:29.760842 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.760855 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.760870 | orchestrator | 2025-05-04 00:53:29.760886 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-04 00:53:29.760900 | orchestrator | Sunday 04 May 2025 00:50:17 +0000 (0:00:02.450) 0:04:21.383 ************ 2025-05-04 00:53:29.760914 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.760929 | orchestrator | 2025-05-04 00:53:29.760944 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-04 00:53:29.760958 | orchestrator | Sunday 04 May 2025 00:50:18 +0000 (0:00:01.627) 0:04:23.010 ************ 2025-05-04 00:53:29.760974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761053 | orchestrator | 2025-05-04 00:53:29.761075 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-04 00:53:29.761095 | orchestrator | Sunday 04 May 2025 00:50:23 +0000 (0:00:04.186) 
0:04:27.197 ************ 2025-05-04 00:53:29.761171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.761199 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.761218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2025-05-04 00:53:29.761234 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.761249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.761270 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.761283 | orchestrator | 2025-05-04 00:53:29.761296 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-04 00:53:29.761309 | orchestrator | Sunday 04 May 2025 00:50:23 +0000 (0:00:00.484) 0:04:27.682 ************ 2025-05-04 00:53:29.761322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761349 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.761362 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761388 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.761401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-04 00:53:29.761427 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.761440 | orchestrator | 2025-05-04 00:53:29.761453 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-04 00:53:29.761498 | orchestrator | Sunday 04 May 2025 00:50:24 +0000 (0:00:01.365) 0:04:29.047 ************ 2025-05-04 00:53:29.761513 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.761526 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.761539 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.761551 | orchestrator | 2025-05-04 00:53:29.761564 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-04 00:53:29.761577 | orchestrator | Sunday 04 May 2025 00:50:26 +0000 (0:00:01.393) 0:04:30.441 ************ 2025-05-04 00:53:29.761589 | orchestrator | changed: [testbed-node-0] 2025-05-04 
00:53:29.761602 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.761614 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.761627 | orchestrator | 2025-05-04 00:53:29.761640 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-04 00:53:29.761652 | orchestrator | Sunday 04 May 2025 00:50:28 +0000 (0:00:02.213) 0:04:32.654 ************ 2025-05-04 00:53:29.761665 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.761684 | orchestrator | 2025-05-04 00:53:29.761697 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-04 00:53:29.761710 | orchestrator | Sunday 04 May 2025 00:50:30 +0000 (0:00:01.647) 0:04:34.301 ************ 2025-05-04 00:53:29.761742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761924 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 00:53:29.761938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.761995 | orchestrator | 2025-05-04 00:53:29.762008 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-04 00:53:29.762050 | orchestrator | Sunday 04 May 2025 00:50:35 +0000 (0:00:05.622) 0:04:39.923 ************ 2025-05-04 00:53:29.762080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.762096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.762111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.762127 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.762142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.762196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.762221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.762237 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.762251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 00:53:29.762266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-05-04 00:53:29.762281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 00:53:29.762295 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.762308 | orchestrator | 2025-05-04 00:53:29.762322 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-04 00:53:29.762336 | orchestrator | Sunday 04 May 2025 00:50:37 +0000 (0:00:01.236) 0:04:41.160 ************ 2025-05-04 00:53:29.762350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762440 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.762455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762510 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.762524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762565 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-04 00:53:29.762579 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.762593 | orchestrator | 2025-05-04 00:53:29.762606 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-04 00:53:29.762625 | orchestrator | Sunday 04 May 2025 00:50:38 +0000 (0:00:01.355) 0:04:42.516 ************ 2025-05-04 00:53:29.762647 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.762669 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.762689 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.762708 | orchestrator | 2025-05-04 00:53:29.762730 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-04 00:53:29.762752 | orchestrator | Sunday 04 May 2025 00:50:39 +0000 (0:00:01.541) 0:04:44.057 ************ 2025-05-04 00:53:29.762773 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.762796 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.762839 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.762853 | orchestrator | 2025-05-04 00:53:29.762866 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-04 00:53:29.762888 | orchestrator | Sunday 04 May 2025 00:50:42 +0000 (0:00:02.537) 0:04:46.595 ************ 2025-05-04 00:53:29.762901 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.762914 | orchestrator | 2025-05-04 00:53:29.762933 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-04 00:53:29.762947 | orchestrator | Sunday 04 May 2025 00:50:44 +0000 (0:00:01.731) 0:04:48.327 ************ 2025-05-04 00:53:29.762960 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-04 00:53:29.762974 | orchestrator | 2025-05-04 00:53:29.762986 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-04 00:53:29.762999 | orchestrator | Sunday 04 May 2025 00:50:45 +0000 (0:00:01.353) 0:04:49.680 ************ 2025-05-04 00:53:29.763068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-04 00:53:29.763112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-04 00:53:29.763136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-04 00:53:29.763159 | orchestrator | 2025-05-04 00:53:29.763180 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-04 00:53:29.763201 | orchestrator | Sunday 04 May 2025 00:50:51 +0000 (0:00:05.617) 0:04:55.298 ************ 2025-05-04 00:53:29.763224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-04 00:53:29.763246 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.763268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-04 00:53:29.763283 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.763296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-04 00:53:29.763317 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.763330 | orchestrator | 2025-05-04 00:53:29.763342 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-04 00:53:29.763355 | orchestrator | Sunday 04 May 2025 00:50:53 +0000 (0:00:02.263) 0:04:57.562 ************ 2025-05-04 00:53:29.763368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763395 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.763407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763476 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.763489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-04 00:53:29.763515 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.763528 | orchestrator | 2025-05-04 00:53:29.763541 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-04 00:53:29.763554 | orchestrator | Sunday 04 May 2025 00:50:55 +0000 (0:00:02.321) 0:04:59.884 ************ 2025-05-04 00:53:29.763566 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.763579 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.763592 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.763604 | orchestrator | 2025-05-04 00:53:29.763617 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-04 00:53:29.763630 | orchestrator | Sunday 04 May 2025 00:50:59 +0000 (0:00:03.214) 0:05:03.098 ************ 2025-05-04 00:53:29.763642 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.763655 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.763668 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.763681 | orchestrator | 2025-05-04 00:53:29.763693 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-04 00:53:29.763706 | orchestrator | Sunday 04 May 2025 00:51:02 +0000 (0:00:03.931) 0:05:07.030 ************ 2025-05-04 00:53:29.763723 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-04 00:53:29.763737 | orchestrator | 2025-05-04 00:53:29.763749 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-04 00:53:29.763769 | orchestrator | Sunday 04 May 2025 00:51:04 +0000 (0:00:01.389) 0:05:08.419 ************ 2025-05-04 00:53:29.763792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-04 00:53:29.763876 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.763901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-04 00:53:29.763923 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.763960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-04 00:53:29.763979 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.763996 | orchestrator |
2025-05-04 00:53:29.764014 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-05-04 00:53:29.764026 | orchestrator | Sunday 04 May 2025 00:51:06 +0000 (0:00:01.779) 0:05:10.198 ************
2025-05-04 00:53:29.764073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-04 00:53:29.764085 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.764096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-04 00:53:29.764107 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.764117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082',
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-04 00:53:29.764140 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.764150 | orchestrator |
2025-05-04 00:53:29.764161 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-05-04 00:53:29.764171 | orchestrator | Sunday 04 May 2025 00:51:08 +0000 (0:00:01.950) 0:05:12.148 ************
2025-05-04 00:53:29.764182 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.764192 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.764203 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.764213 | orchestrator |
2025-05-04 00:53:29.764223 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-04 00:53:29.764234 | orchestrator | Sunday 04 May 2025 00:51:10 +0000 (0:00:02.055) 0:05:14.204 ************
2025-05-04 00:53:29.764244 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.764255 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.764265 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.764280 | orchestrator |
2025-05-04 00:53:29.764291 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-04 00:53:29.764302 | orchestrator | Sunday 04 May 2025 00:51:13 +0000 (0:00:03.103) 0:05:17.308 ************
2025-05-04 00:53:29.764312 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.764322 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.764332 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.764342 | orchestrator |
2025-05-04 00:53:29.764353 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-05-04 00:53:29.764363 | orchestrator | Sunday 04 May 2025 00:51:17 +0000 (0:00:03.936) 0:05:21.245 ************
2025-05-04 00:53:29.764374 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-05-04 00:53:29.764385 | orchestrator |
2025-05-04 00:53:29.764395 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-05-04 00:53:29.764405 | orchestrator | Sunday 04 May 2025 00:51:18 +0000 (0:00:01.615) 0:05:22.861 ************
2025-05-04 00:53:29.764416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764427 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.764437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764448 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.764494 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764508 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.764518 | orchestrator |
2025-05-04 00:53:29.764529 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-05-04 00:53:29.764545 | orchestrator | Sunday 04 May 2025 00:51:20 +0000 (0:00:02.016) 0:05:24.877 ************
2025-05-04 00:53:29.764556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764567 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.764577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083',
'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764588 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.764598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-04 00:53:29.764609 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.764620 | orchestrator |
2025-05-04 00:53:29.764630 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-05-04 00:53:29.764640 | orchestrator | Sunday 04 May 2025 00:51:22 +0000 (0:00:01.568) 0:05:26.446 ************
2025-05-04 00:53:29.764651 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.764661 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.764671 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.764681 | orchestrator |
2025-05-04 00:53:29.764692 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-04 00:53:29.764702 | orchestrator | Sunday 04 May 2025 00:51:24 +0000 (0:00:02.517) 0:05:28.964 ************
2025-05-04 00:53:29.764712 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.764723 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.764733 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.764743 | orchestrator |
2025-05-04 00:53:29.764753 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-04 00:53:29.764768 | orchestrator | Sunday 04 May 2025 00:51:27 +0000
(0:00:03.066) 0:05:32.031 ************
2025-05-04 00:53:29.764779 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.764789 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.764800 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.764939 | orchestrator |
2025-05-04 00:53:29.764972 | orchestrator | TASK [include_role : octavia] **************************************************
2025-05-04 00:53:29.764983 | orchestrator | Sunday 04 May 2025 00:51:31 +0000 (0:00:04.012) 0:05:36.043 ************
2025-05-04 00:53:29.764993 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.765004 | orchestrator |
2025-05-04 00:53:29.765014 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-05-04 00:53:29.765025 | orchestrator | Sunday 04 May 2025 00:51:33 +0000 (0:00:01.831) 0:05:37.874 ************
2025-05-04 00:53:29.765088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765112 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765227 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765270 | orchestrator | changed: [testbed-node-2] => (item={'key':
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765367 | orchestrator |
2025-05-04 00:53:29.765377 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-05-04 00:53:29.765386 | orchestrator | Sunday 04 May 2025 00:51:38 +0000 (0:00:04.677) 0:05:42.552 ************
2025-05-04 00:53:29.765395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765448 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765504 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.765513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765539 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.765574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.765586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group':
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-04 00:53:29.765595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-04 00:53:29.765622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes':
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-04 00:53:29.765631 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.765640 | orchestrator |
2025-05-04 00:53:29.765650 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-05-04 00:53:29.765659 | orchestrator | Sunday 04 May 2025 00:51:39 +0000 (0:00:01.077) 0:05:43.630 ************
2025-05-04 00:53:29.765668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-04 00:53:29.765678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-04 00:53:29.765687 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.765696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-04 00:53:29.765705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-04 00:53:29.765714 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.765742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-04 00:53:29.765753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-04 00:53:29.765762 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.765771 | orchestrator | 2025-05-04 00:53:29.765780 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-04 00:53:29.765789 | orchestrator | Sunday 04 May 2025 00:51:41 +0000 (0:00:01.528) 0:05:45.159 ************ 2025-05-04 00:53:29.765798 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.765827 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.765836 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.765845 | orchestrator | 2025-05-04 00:53:29.765854 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-04 00:53:29.765863 | orchestrator | Sunday 04 May 2025 00:51:42 +0000 (0:00:01.595) 0:05:46.754 ************ 2025-05-04 00:53:29.765871 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:53:29.765880 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:53:29.765889 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:53:29.765898 | orchestrator | 2025-05-04 00:53:29.765906 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-04 00:53:29.765915 | orchestrator | Sunday 04 May 2025 00:51:45 +0000 (0:00:03.089) 0:05:49.844 ************ 2025-05-04 00:53:29.765924 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:53:29.765932 | orchestrator | 2025-05-04 00:53:29.765941 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2025-05-04 00:53:29.765950 | orchestrator | Sunday 04 May 2025 00:51:47 +0000 (0:00:01.972) 0:05:51.816 ************ 2025-05-04 00:53:29.765965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:53:29.765981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:53:29.765991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:53:29.766069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:53:29.766083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:53:29.766106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:53:29.766116 | orchestrator | 2025-05-04 00:53:29.766124 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-04 00:53:29.766133 | orchestrator | Sunday 04 May 2025 00:51:54 +0000 (0:00:07.005) 0:05:58.822 ************ 2025-05-04 00:53:29.766164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:53:29.766175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:53:29.766196 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.766206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:53:29.766215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:53:29.766225 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.766234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:53:29.766264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:53:29.766286 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.766295 | orchestrator | 2025-05-04 00:53:29.766304 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-04 00:53:29.766313 | orchestrator | Sunday 04 May 2025 00:51:55 +0000 (0:00:01.008) 0:05:59.831 ************ 2025-05-04 00:53:29.766322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-04 00:53:29.766331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766350 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.766363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-05-04 00:53:29.766389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766409 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.766418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-04 00:53:29.766427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-04 00:53:29.766444 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.766453 | orchestrator | 2025-05-04 00:53:29.766462 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-04 00:53:29.766471 | orchestrator | Sunday 04 May 2025 00:51:57 +0000 (0:00:01.602) 0:06:01.434 ************ 2025-05-04 00:53:29.766479 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.766488 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.766497 | orchestrator | 
skipping: [testbed-node-2]
2025-05-04 00:53:29.766505 | orchestrator |
2025-05-04 00:53:29.766514 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-04 00:53:29.766522 | orchestrator | Sunday 04 May 2025 00:51:57 +0000 (0:00:00.458) 0:06:01.892 ************
2025-05-04 00:53:29.766531 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.766539 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.766548 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.766557 | orchestrator |
2025-05-04 00:53:29.766566 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-04 00:53:29.766574 | orchestrator | Sunday 04 May 2025 00:51:59 +0000 (0:00:01.863) 0:06:03.755 ************
2025-05-04 00:53:29.766609 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.766620 | orchestrator |
2025-05-04 00:53:29.766629 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-04 00:53:29.766637 | orchestrator | Sunday 04 May 2025 00:52:01 +0000 (0:00:01.976) 0:06:05.732 ************
2025-05-04 00:53:29.766647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091',
'listen_port': '9091', 'active_passive': True}}}}) 2025-05-04 00:53:29.766656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.766666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.766695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-04 00:53:29.766732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.766750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.766779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-04 00:53:29.766788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.766797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766869 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.766879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 00:53:29.766888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.766898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.766957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 00:53:29.766977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.766986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.766995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 00:53:29.767047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.767056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767110 | orchestrator | 2025-05-04 00:53:29.767119 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-04 00:53:29.767128 | orchestrator | Sunday 04 May 2025 00:52:06 +0000 (0:00:05.031) 0:06:10.763 ************ 2025-05-04 00:53:29.767138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 00:53:29.767147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.767156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 00:53:29.767209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.767218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767266 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.767279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 00:53:29.767289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.767298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 00:53:29.767348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.767358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 
00:53:29.767392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 00:53:29.767415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767433 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.767446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 00:53:29.767557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 00:53:29.767568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 00:53:29.767601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 00:53:29.767610 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.767619 | orchestrator | 2025-05-04 00:53:29.767628 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] 
********************
2025-05-04 00:53:29.767636 | orchestrator | Sunday 04 May 2025 00:52:08 +0000 (0:00:01.640) 0:06:12.404 ************
2025-05-04 00:53:29.767646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767692 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.767701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767737 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.767746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-04 00:53:29.767767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-04 00:53:29.767788 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.767797 | orchestrator |
2025-05-04 00:53:29.767820 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-05-04 00:53:29.767830 | orchestrator | Sunday 04 May 2025 00:52:10 +0000 (0:00:01.702) 0:06:14.107 ************
2025-05-04 00:53:29.767838 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.767847 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.767856 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.767869 | orchestrator |
2025-05-04 00:53:29.767878 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-05-04 00:53:29.767886 | orchestrator | Sunday 04 May 2025 00:52:10 +0000 (0:00:00.777) 0:06:14.884 ************
2025-05-04 00:53:29.767895 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.767903 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.767912 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.767920 | orchestrator |
2025-05-04 00:53:29.767929 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-05-04 00:53:29.767945 | orchestrator | Sunday 04 May 2025 00:52:12 +0000 (0:00:02.183) 0:06:17.067 ************
2025-05-04 00:53:29.767954 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.767963 | orchestrator |
2025-05-04 00:53:29.767971 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-05-04 00:53:29.767980 | orchestrator | Sunday 04 May 2025 00:52:14 +0000 (0:00:01.964) 0:06:19.032 ************
2025-05-04 00:53:29.767989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.767999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.768012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.768022 | orchestrator |
2025-05-04 00:53:29.768031 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-05-04 00:53:29.768040 | orchestrator | Sunday 04 May 2025 00:52:18 +0000 (0:00:03.165) 0:06:22.198 ************
2025-05-04 00:53:29.768049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.768062 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.768081 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-04 00:53:29.768099 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768108 | orchestrator |
2025-05-04 00:53:29.768116 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-05-04 00:53:29.768125 | orchestrator | Sunday 04 May 2025 00:52:18 +0000 (0:00:00.727) 0:06:22.925 ************
2025-05-04 00:53:29.768134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-04 00:53:29.768143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-04 00:53:29.768151 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768160 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-04 00:53:29.768181 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768190 | orchestrator |
2025-05-04 00:53:29.768199 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-05-04 00:53:29.768212 | orchestrator | Sunday 04 May 2025 00:52:20 +0000 (0:00:01.237) 0:06:24.163 ************
2025-05-04 00:53:29.768221 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768230 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768239 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768247 | orchestrator |
2025-05-04 00:53:29.768256 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-05-04 00:53:29.768265 | orchestrator | Sunday 04 May 2025 00:52:20 +0000 (0:00:00.441) 0:06:24.605 ************
2025-05-04 00:53:29.768273 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768282 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768290 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768299 | orchestrator |
2025-05-04 00:53:29.768308 | orchestrator | TASK [include_role : skyline] **************************************************
2025-05-04 00:53:29.768316 | orchestrator | Sunday 04 May 2025 00:52:22 +0000 (0:00:02.054) 0:06:26.659 ************
2025-05-04 00:53:29.768325 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:53:29.768334 | orchestrator |
2025-05-04 00:53:29.768343 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-05-04 00:53:29.768351 | orchestrator | Sunday 04 May 2025 00:52:24 +0000 (0:00:02.141) 0:06:28.801 ************
2025-05-04 00:53:29.768360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768427 | orchestrator |
2025-05-04 00:53:29.768436 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-05-04 00:53:29.768444 | orchestrator | Sunday 04 May 2025 00:52:32 +0000 (0:00:07.441) 0:06:36.242 ************
2025-05-04 00:53:29.768453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768480 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768507 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-04 00:53:29.768548 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768557 | orchestrator |
2025-05-04 00:53:29.768566 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-04 00:53:29.768575 | orchestrator | Sunday 04 May 2025 00:52:33 +0000 (0:00:01.297) 0:06:37.540 ************
2025-05-04 00:53:29.768584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768629 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768664 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-04 00:53:29.768713 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768722 | orchestrator |
2025-05-04 00:53:29.768731 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-04 00:53:29.768739 | orchestrator | Sunday 04 May 2025 00:52:34 +0000 (0:00:01.528) 0:06:39.069 ************
2025-05-04 00:53:29.768748 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.768756 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.768765 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.768774 | orchestrator |
2025-05-04 00:53:29.768782 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-04 00:53:29.768794 | orchestrator | Sunday 04 May 2025 00:52:36 +0000 (0:00:01.558) 0:06:40.627 ************
2025-05-04 00:53:29.768803 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.768866 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.768875 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.768884 | orchestrator |
2025-05-04 00:53:29.768893 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-04 00:53:29.768902 | orchestrator | Sunday 04 May 2025 00:52:39 +0000 (0:00:02.804) 0:06:43.432 ************
2025-05-04 00:53:29.768911 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768920 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768933 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768942 | orchestrator |
2025-05-04 00:53:29.768951 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-04 00:53:29.768960 | orchestrator | Sunday 04 May 2025 00:52:39 +0000 (0:00:00.342) 0:06:43.774 ************
2025-05-04 00:53:29.768968 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.768977 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.768986 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.768994 | orchestrator |
2025-05-04 00:53:29.769003 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-04 00:53:29.769012 | orchestrator | Sunday 04 May 2025 00:52:40 +0000 (0:00:00.629) 0:06:44.403 ************
2025-05-04 00:53:29.769020 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769029 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769038 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769047 | orchestrator |
2025-05-04 00:53:29.769055 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-04 00:53:29.769064 | orchestrator | Sunday 04 May 2025 00:52:40 +0000 (0:00:00.604) 0:06:45.008 ************
2025-05-04 00:53:29.769073 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769081 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769090 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769099 | orchestrator |
2025-05-04 00:53:29.769107 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-04 00:53:29.769116 | orchestrator | Sunday 04 May 2025 00:52:41 +0000 (0:00:00.598) 0:06:45.607 ************
2025-05-04 00:53:29.769125 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769134 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769142 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769151 | orchestrator |
2025-05-04 00:53:29.769160 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-04 00:53:29.769168 | orchestrator | Sunday 04 May 2025 00:52:41 +0000 (0:00:00.306) 0:06:45.913 ************
2025-05-04 00:53:29.769177 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769194 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769203 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769222 | orchestrator |
2025-05-04 00:53:29.769231 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-04 00:53:29.769240 | orchestrator | Sunday 04 May 2025 00:52:42 +0000 (0:00:01.104) 0:06:47.018 ************
2025-05-04 00:53:29.769257 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769266 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769275 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769283 | orchestrator |
2025-05-04 00:53:29.769292 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-04 00:53:29.769301 | orchestrator | Sunday 04 May 2025 00:52:43 +0000 (0:00:00.964) 0:06:47.982 ************
2025-05-04 00:53:29.769310 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769318 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769327 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769336 | orchestrator |
2025-05-04 00:53:29.769345 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-04 00:53:29.769353 | orchestrator | Sunday 04 May 2025 00:52:44 +0000 (0:00:00.377) 0:06:48.360 ************
2025-05-04 00:53:29.769362 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769372 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769382 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769390 | orchestrator |
2025-05-04 00:53:29.769399 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-04 00:53:29.769408 | orchestrator | Sunday 04 May 2025 00:52:45 +0000 (0:00:01.446) 0:06:49.807 ************
2025-05-04 00:53:29.769417 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769426 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769434 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769443 | orchestrator |
2025-05-04 00:53:29.769452 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-04 00:53:29.769461 | orchestrator | Sunday 04 May 2025 00:52:47 +0000 (0:00:01.352) 0:06:51.159 ************
2025-05-04 00:53:29.769470 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769478 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769487 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769496 | orchestrator |
2025-05-04 00:53:29.769504 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-04 00:53:29.769513 | orchestrator | Sunday 04 May 2025 00:52:48 +0000 (0:00:00.929) 0:06:52.089 ************
2025-05-04 00:53:29.769522 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.769530 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.769539 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.769548 | orchestrator |
2025-05-04 00:53:29.769556 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-04 00:53:29.769565 | orchestrator | Sunday 04 May 2025 00:52:58 +0000 (0:00:10.632) 0:07:02.721 ************
2025-05-04 00:53:29.769574 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769583 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769591 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769600 | orchestrator |
2025-05-04 00:53:29.769609 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-04 00:53:29.769617 | orchestrator | Sunday 04 May 2025 00:52:59 +0000 (0:00:01.187) 0:07:03.909 ************
2025-05-04 00:53:29.769626 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.769635 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.769643 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.769652 | orchestrator |
2025-05-04 00:53:29.769661 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-04 00:53:29.769669 | orchestrator | Sunday 04 May 2025 00:53:08 +0000 (0:00:09.122) 0:07:13.031 ************
2025-05-04 00:53:29.769678 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:53:29.769687 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:53:29.769695 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:53:29.769704 | orchestrator |
2025-05-04 00:53:29.769720 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-04 00:53:29.769733 | orchestrator | Sunday 04 May 2025 00:53:10 +0000 (0:00:01.778) 0:07:14.809 ************
2025-05-04 00:53:29.769742 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:53:29.769751 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:53:29.769759 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:53:29.769768 | orchestrator |
2025-05-04 00:53:29.769780 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-04 00:53:29.769790 | orchestrator | Sunday 04 May 2025 00:53:20 +0000 (0:00:09.755) 0:07:24.565 ************
2025-05-04 00:53:29.769798 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769823 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769832 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769841 | orchestrator |
2025-05-04 00:53:29.769850 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-04 00:53:29.769858 | orchestrator | Sunday 04 May 2025 00:53:21 +0000 (0:00:00.710) 0:07:25.275 ************
2025-05-04 00:53:29.769867 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769875 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769884 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769892 | orchestrator |
2025-05-04 00:53:29.769901 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-04 00:53:29.769910 | orchestrator | Sunday 04 May 2025 00:53:21 +0000 (0:00:00.701) 0:07:25.976 ************
2025-05-04 00:53:29.769919 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769927 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769936 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769944 | orchestrator |
2025-05-04 00:53:29.769953 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-04 00:53:29.769962 | orchestrator | Sunday 04 May 2025 00:53:22 +0000 (0:00:00.377) 0:07:26.354 ************
2025-05-04 00:53:29.769970 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:53:29.769979 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:53:29.769988 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:53:29.769996 | orchestrator |
2025-05-04 00:53:29.770005 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-04 00:53:29.770034 | orchestrator | Sunday 04 May 2025 00:53:23
+0000 (0:00:00.767) 0:07:27.121 ************ 2025-05-04 00:53:29.770045 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.770055 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.770064 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.770072 | orchestrator | 2025-05-04 00:53:29.770081 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-04 00:53:29.770089 | orchestrator | Sunday 04 May 2025 00:53:23 +0000 (0:00:00.754) 0:07:27.876 ************ 2025-05-04 00:53:29.770098 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:53:29.770107 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:53:29.770115 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:53:29.770124 | orchestrator | 2025-05-04 00:53:29.770133 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-04 00:53:29.770141 | orchestrator | Sunday 04 May 2025 00:53:24 +0000 (0:00:00.676) 0:07:28.553 ************ 2025-05-04 00:53:29.770150 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:53:29.770159 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:53:29.770168 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:53:29.770176 | orchestrator | 2025-05-04 00:53:29.770185 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-04 00:53:29.770194 | orchestrator | Sunday 04 May 2025 00:53:25 +0000 (0:00:00.982) 0:07:29.535 ************ 2025-05-04 00:53:29.770203 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:53:29.770211 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:53:29.770220 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:53:29.770228 | orchestrator | 2025-05-04 00:53:29.770237 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:53:29.770251 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 
skipped=92  rescued=0 ignored=0 2025-05-04 00:53:29.770261 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-04 00:53:29.770270 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-04 00:53:29.770279 | orchestrator | 2025-05-04 00:53:29.770287 | orchestrator | 2025-05-04 00:53:29.770296 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:53:29.770305 | orchestrator | Sunday 04 May 2025 00:53:26 +0000 (0:00:01.265) 0:07:30.801 ************ 2025-05-04 00:53:29.770314 | orchestrator | =============================================================================== 2025-05-04 00:53:29.770322 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.63s 2025-05-04 00:53:29.770331 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.76s 2025-05-04 00:53:29.770340 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.12s 2025-05-04 00:53:29.770348 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.73s 2025-05-04 00:53:29.770357 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.44s 2025-05-04 00:53:29.770366 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.01s 2025-05-04 00:53:29.770374 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.83s 2025-05-04 00:53:29.770383 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.72s 2025-05-04 00:53:29.770391 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.62s 2025-05-04 00:53:29.770400 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.62s 
2025-05-04 00:53:29.770408 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.53s 2025-05-04 00:53:29.770421 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.39s 2025-05-04 00:53:29.770430 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.38s 2025-05-04 00:53:29.770439 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.16s 2025-05-04 00:53:29.770452 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.03s 2025-05-04 00:53:32.779392 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.68s 2025-05-04 00:53:32.779524 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.59s 2025-05-04 00:53:32.779543 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.58s 2025-05-04 00:53:32.779558 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.41s 2025-05-04 00:53:32.779573 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.37s 2025-05-04 00:53:32.779589 | orchestrator | 2025-05-04 00:53:29 | INFO  | Task 0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:53:32.779604 | orchestrator | 2025-05-04 00:53:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:53:32.779618 | orchestrator | 2025-05-04 00:53:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:53:32.779650 | orchestrator | 2025-05-04 00:53:32 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:53:32.780484 | orchestrator | 2025-05-04 00:53:32 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:53:32.781335 | orchestrator | 2025-05-04 00:53:32 | INFO  | Task 
0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:53:32.782335 | orchestrator | 2025-05-04 00:53:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:53:32.782508 | orchestrator | 2025-05-04 00:53:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:53:35.835364 | orchestrator | 2025-05-04 00:53:35 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:53:35.842582 | orchestrator | 2025-05-04 00:53:35 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:53:35.843213 | orchestrator | 2025-05-04 00:53:35 | INFO  | Task 0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:53:35.843250 | orchestrator | 2025-05-04 00:53:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:53:38.884141 | orchestrator | 2025-05-04 00:53:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:53:38.884290 | orchestrator | 2025-05-04 00:53:38 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:53:38.884585 | orchestrator | 2025-05-04 00:53:38 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:53:38.885237 | orchestrator | 2025-05-04 00:53:38 | INFO  | Task 0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:53:38.887868 | orchestrator | 2025-05-04 00:53:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:53:41.928276 | orchestrator | 2025-05-04 00:53:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:53:41.928409 | orchestrator | 2025-05-04 00:53:41 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:53:41.928662 | orchestrator | 2025-05-04 00:53:41 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:53:41.929410 | orchestrator | 2025-05-04 00:53:41 | INFO  | Task 
0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:55:31.972311 | orchestrator | 2025-05-04 00:55:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:35.025969 | orchestrator | 2025-05-04 00:55:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:35.026180 | orchestrator | 2025-05-04 00:55:35 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:35.028498 | orchestrator | 2025-05-04 00:55:35 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:35.030492 | orchestrator | 2025-05-04 00:55:35 | INFO  | Task 0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:55:35.032328 | orchestrator | 2025-05-04 00:55:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:35.033665 | orchestrator | 2025-05-04 00:55:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:38.076448 | orchestrator | 2025-05-04 00:55:38 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:38.079371 | orchestrator | 2025-05-04 00:55:38 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:38.081289 | orchestrator | 2025-05-04 00:55:38 | INFO  | Task 0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state STARTED 2025-05-04 00:55:38.081324 | orchestrator | 2025-05-04 00:55:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:41.131342 | orchestrator | 2025-05-04 00:55:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:41.131488 | orchestrator | 2025-05-04 00:55:41 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:41.133308 | orchestrator | 2025-05-04 00:55:41 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:41.134132 | orchestrator | 2025-05-04 00:55:41 | INFO  | Task 
0ed73cb7-b207-454c-8e1f-4c18b08bb53d is in state SUCCESS
2025-05-04 00:55:41.136201 | orchestrator |
2025-05-04 00:55:41.136243 | orchestrator |
2025-05-04 00:55:41.136258 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:55:41.136273 | orchestrator |
2025-05-04 00:55:41.136288 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 00:55:41.136302 | orchestrator | Sunday 04 May 2025 00:53:30 +0000 (0:00:00.337) 0:00:00.337 ************
2025-05-04 00:55:41.136318 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:55:41.136333 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:55:41.136347 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:55:41.136383 | orchestrator |
2025-05-04 00:55:41.136397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:55:41.136412 | orchestrator | Sunday 04 May 2025 00:53:31 +0000 (0:00:00.432) 0:00:00.770 ************
2025-05-04 00:55:41.136427 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-04 00:55:41.136442 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-04 00:55:41.136456 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-04 00:55:41.136470 | orchestrator |
2025-05-04 00:55:41.136484 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-04 00:55:41.136498 | orchestrator |
2025-05-04 00:55:41.136512 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-04 00:55:41.136527 | orchestrator | Sunday 04 May 2025 00:53:31 +0000 (0:00:00.295) 0:00:01.065 ************
2025-05-04 00:55:41.136541 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:55:41.136555 | orchestrator |
2025-05-04
00:55:41.136569 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-04 00:55:41.136583 | orchestrator | Sunday 04 May 2025 00:53:32 +0000 (0:00:00.752) 0:00:01.817 ************ 2025-05-04 00:55:41.136597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:55:41.136611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:55:41.136625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-04 00:55:41.136639 | orchestrator | 2025-05-04 00:55:41.136653 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-04 00:55:41.136667 | orchestrator | Sunday 04 May 2025 00:53:33 +0000 (0:00:00.766) 0:00:02.584 ************ 2025-05-04 00:55:41.136711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.136730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.136757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.136799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.136820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.136847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.136864 | orchestrator | 2025-05-04 00:55:41.136880 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-04 00:55:41.136896 | orchestrator | Sunday 04 May 2025 00:53:34 +0000 (0:00:01.710) 0:00:04.295 ************ 2025-05-04 00:55:41.136912 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:55:41.136928 | orchestrator | 2025-05-04 00:55:41.136944 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-04 00:55:41.136960 | orchestrator | Sunday 04 May 2025 00:53:35 +0000 (0:00:00.860) 0:00:05.155 ************ 2025-05-04 00:55:41.136988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137111 | orchestrator | 2025-05-04 00:55:41.137127 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-04 00:55:41.137144 | orchestrator | Sunday 04 May 2025 00:53:39 +0000 (0:00:03.616) 0:00:08.772 ************ 
2025-05-04 00:55:41.137160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-05-04 00:55:41.137193 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:55:41.137216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:55:41.137254 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:55:41.137270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:55:41.137300 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:55:41.137314 | orchestrator | 2025-05-04 00:55:41.137328 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-04 00:55:41.137348 | orchestrator | Sunday 04 May 2025 00:53:40 +0000 (0:00:01.181) 0:00:09.954 ************ 2025-05-04 00:55:41.137369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:55:41.137408 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:55:41.137423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:55:41.137453 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:55:41.137474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-04 00:55:41.137490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-04 00:55:41.137512 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:55:41.137526 | orchestrator | 2025-05-04 00:55:41.137541 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-04 00:55:41.137555 | orchestrator | Sunday 04 May 2025 00:53:41 +0000 (0:00:00.907) 0:00:10.861 ************ 2025-05-04 00:55:41.137569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.137676 | orchestrator | 2025-05-04 00:55:41.137690 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-04 00:55:41.137705 | orchestrator | Sunday 04 May 2025 00:53:43 +0000 (0:00:02.378) 0:00:13.240 ************ 2025-05-04 00:55:41.137719 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:55:41.137733 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.137748 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:55:41.137762 | orchestrator | 2025-05-04 00:55:41.137799 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-04 00:55:41.137825 | orchestrator | Sunday 04 May 2025 00:53:46 +0000 (0:00:03.099) 0:00:16.339 ************ 2025-05-04 00:55:41.137843 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.137857 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:55:41.137872 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:55:41.137886 | 
orchestrator | 2025-05-04 00:55:41.137899 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-04 00:55:41.137914 | orchestrator | Sunday 04 May 2025 00:53:48 +0000 (0:00:01.823) 0:00:18.163 ************ 2025-05-04 00:55:41.137937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-05-04 00:55:41.137977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-04 00:55:41.137992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-05-04 00:55:41.138013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.138085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-04 00:55:41.138101 | orchestrator | 2025-05-04 00:55:41.138116 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-04 00:55:41.138130 | orchestrator | Sunday 04 May 2025 00:53:51 +0000 (0:00:02.486) 0:00:20.649 ************ 2025-05-04 00:55:41.138144 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:55:41.138159 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:55:41.138173 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:55:41.138187 | orchestrator | 2025-05-04 00:55:41.138202 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-04 00:55:41.138216 | orchestrator | Sunday 04 May 2025 00:53:51 +0000 (0:00:00.595) 0:00:21.245 ************ 2025-05-04 00:55:41.138230 | orchestrator | 2025-05-04 00:55:41.138244 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-04 00:55:41.138259 | orchestrator | Sunday 04 May 2025 00:53:52 +0000 (0:00:00.484) 0:00:21.729 ************ 2025-05-04 00:55:41.138273 | orchestrator | 2025-05-04 00:55:41.138287 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-04 00:55:41.138301 | orchestrator | Sunday 04 May 2025 00:53:52 +0000 (0:00:00.138) 0:00:21.868 ************ 2025-05-04 00:55:41.138314 | orchestrator | 2025-05-04 00:55:41.138328 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-04 00:55:41.138342 | orchestrator | Sunday 04 May 2025 00:53:52 +0000 (0:00:00.169) 0:00:22.037 ************ 2025-05-04 00:55:41.138356 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:55:41.138370 | orchestrator | 
2025-05-04 00:55:41.138384 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-04 00:55:41.138398 | orchestrator | Sunday 04 May 2025 00:53:53 +0000 (0:00:00.501) 0:00:22.539 ************ 2025-05-04 00:55:41.138412 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:55:41.138427 | orchestrator | 2025-05-04 00:55:41.138441 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-04 00:55:41.138455 | orchestrator | Sunday 04 May 2025 00:53:54 +0000 (0:00:01.301) 0:00:23.840 ************ 2025-05-04 00:55:41.138469 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.138482 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:55:41.138496 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:55:41.138510 | orchestrator | 2025-05-04 00:55:41.138524 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-04 00:55:41.138545 | orchestrator | Sunday 04 May 2025 00:54:27 +0000 (0:00:33.075) 0:00:56.915 ************ 2025-05-04 00:55:41.138559 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.138573 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:55:41.138587 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:55:41.138601 | orchestrator | 2025-05-04 00:55:41.138615 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-04 00:55:41.138629 | orchestrator | Sunday 04 May 2025 00:55:26 +0000 (0:00:58.918) 0:01:55.834 ************ 2025-05-04 00:55:41.138643 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:55:41.138658 | orchestrator | 2025-05-04 00:55:41.138672 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-04 00:55:41.138686 | orchestrator | Sunday 04 May 2025 00:55:27 +0000 
(0:00:00.775) 0:01:56.610 ************ 2025-05-04 00:55:41.138700 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:55:41.138714 | orchestrator | 2025-05-04 00:55:41.138728 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-04 00:55:41.138742 | orchestrator | Sunday 04 May 2025 00:55:29 +0000 (0:00:02.711) 0:01:59.321 ************ 2025-05-04 00:55:41.138756 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:55:41.138770 | orchestrator | 2025-05-04 00:55:41.138822 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-04 00:55:41.138843 | orchestrator | Sunday 04 May 2025 00:55:32 +0000 (0:00:02.435) 0:02:01.756 ************ 2025-05-04 00:55:41.138858 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.138872 | orchestrator | 2025-05-04 00:55:41.138886 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-04 00:55:41.138901 | orchestrator | Sunday 04 May 2025 00:55:35 +0000 (0:00:02.878) 0:02:04.635 ************ 2025-05-04 00:55:41.138915 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:55:41.138929 | orchestrator | 2025-05-04 00:55:41.138950 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:55:44.187134 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-04 00:55:44.187273 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-04 00:55:44.187295 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-04 00:55:44.187310 | orchestrator | 2025-05-04 00:55:44.187324 | orchestrator | 2025-05-04 00:55:44.187339 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:55:44.187355 | orchestrator | Sunday 
04 May 2025 00:55:38 +0000 (0:00:02.873) 0:02:07.509 ************ 2025-05-04 00:55:44.187369 | orchestrator | =============================================================================== 2025-05-04 00:55:44.187383 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 58.92s 2025-05-04 00:55:44.187397 | orchestrator | opensearch : Restart opensearch container ------------------------------ 33.08s 2025-05-04 00:55:44.187411 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.62s 2025-05-04 00:55:44.187426 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.10s 2025-05-04 00:55:44.187439 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.88s 2025-05-04 00:55:44.187454 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.87s 2025-05-04 00:55:44.187469 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.71s 2025-05-04 00:55:44.187483 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.49s 2025-05-04 00:55:44.187498 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.44s 2025-05-04 00:55:44.187551 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2025-05-04 00:55:44.187567 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.82s 2025-05-04 00:55:44.187581 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-05-04 00:55:44.187595 | orchestrator | opensearch : Perform a flush -------------------------------------------- 1.30s 2025-05-04 00:55:44.187609 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.18s 2025-05-04 00:55:44.187624 | orchestrator | service-cert-copy : 
opensearch | Copying over backend internal TLS key --- 0.91s 2025-05-04 00:55:44.187638 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.86s 2025-05-04 00:55:44.187654 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.79s 2025-05-04 00:55:44.187669 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2025-05-04 00:55:44.187685 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.77s 2025-05-04 00:55:44.187700 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-05-04 00:55:44.187716 | orchestrator | 2025-05-04 00:55:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:44.187732 | orchestrator | 2025-05-04 00:55:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:44.187767 | orchestrator | 2025-05-04 00:55:44 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:44.188177 | orchestrator | 2025-05-04 00:55:44 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:44.189414 | orchestrator | 2025-05-04 00:55:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:47.235579 | orchestrator | 2025-05-04 00:55:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:47.235700 | orchestrator | 2025-05-04 00:55:47 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:47.239644 | orchestrator | 2025-05-04 00:55:47 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:50.293291 | orchestrator | 2025-05-04 00:55:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:50.293455 | orchestrator | 2025-05-04 00:55:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:50.293495 | 
orchestrator | 2025-05-04 00:55:50 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:50.295960 | orchestrator | 2025-05-04 00:55:50 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:50.297747 | orchestrator | 2025-05-04 00:55:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:53.353291 | orchestrator | 2025-05-04 00:55:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:53.353443 | orchestrator | 2025-05-04 00:55:53 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:53.359264 | orchestrator | 2025-05-04 00:55:53 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:53.364661 | orchestrator | 2025-05-04 00:55:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:56.419347 | orchestrator | 2025-05-04 00:55:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:56.419505 | orchestrator | 2025-05-04 00:55:56 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:56.421483 | orchestrator | 2025-05-04 00:55:56 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:56.423158 | orchestrator | 2025-05-04 00:55:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:59.474167 | orchestrator | 2025-05-04 00:55:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:55:59.474282 | orchestrator | 2025-05-04 00:55:59 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:55:59.475513 | orchestrator | 2025-05-04 00:55:59 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:55:59.477169 | orchestrator | 2025-05-04 00:55:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:55:59.478004 | orchestrator | 2025-05-04 
00:55:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:02.532015 | orchestrator | 2025-05-04 00:56:02 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:02.533134 | orchestrator | 2025-05-04 00:56:02 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:02.534706 | orchestrator | 2025-05-04 00:56:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:02.534847 | orchestrator | 2025-05-04 00:56:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:05.594133 | orchestrator | 2025-05-04 00:56:05 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:05.596904 | orchestrator | 2025-05-04 00:56:05 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:05.599991 | orchestrator | 2025-05-04 00:56:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:05.600194 | orchestrator | 2025-05-04 00:56:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:08.650462 | orchestrator | 2025-05-04 00:56:08 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:08.652394 | orchestrator | 2025-05-04 00:56:08 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:08.654169 | orchestrator | 2025-05-04 00:56:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:11.711251 | orchestrator | 2025-05-04 00:56:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:11.711404 | orchestrator | 2025-05-04 00:56:11 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:11.714473 | orchestrator | 2025-05-04 00:56:11 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:11.715873 | orchestrator | 2025-05-04 00:56:11 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:14.775401 | orchestrator | 2025-05-04 00:56:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:14.775554 | orchestrator | 2025-05-04 00:56:14 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:14.776944 | orchestrator | 2025-05-04 00:56:14 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:14.778223 | orchestrator | 2025-05-04 00:56:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:17.838989 | orchestrator | 2025-05-04 00:56:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:17.839130 | orchestrator | 2025-05-04 00:56:17 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:17.841134 | orchestrator | 2025-05-04 00:56:17 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:17.843207 | orchestrator | 2025-05-04 00:56:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:20.894878 | orchestrator | 2025-05-04 00:56:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:20.896852 | orchestrator | 2025-05-04 00:56:20 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:20.900571 | orchestrator | 2025-05-04 00:56:20 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:20.900634 | orchestrator | 2025-05-04 00:56:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:23.959182 | orchestrator | 2025-05-04 00:56:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:23.959283 | orchestrator | 2025-05-04 00:56:23 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:23.961411 | orchestrator | 2025-05-04 00:56:23 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state 
STARTED 2025-05-04 00:56:23.965595 | orchestrator | 2025-05-04 00:56:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:23.966398 | orchestrator | 2025-05-04 00:56:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:27.019394 | orchestrator | 2025-05-04 00:56:27 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:27.020308 | orchestrator | 2025-05-04 00:56:27 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:27.020349 | orchestrator | 2025-05-04 00:56:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:30.068129 | orchestrator | 2025-05-04 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:30.068287 | orchestrator | 2025-05-04 00:56:30 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:30.068610 | orchestrator | 2025-05-04 00:56:30 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:30.070281 | orchestrator | 2025-05-04 00:56:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:33.120273 | orchestrator | 2025-05-04 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:33.120401 | orchestrator | 2025-05-04 00:56:33 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:33.121657 | orchestrator | 2025-05-04 00:56:33 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:33.123963 | orchestrator | 2025-05-04 00:56:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:33.124311 | orchestrator | 2025-05-04 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:36.178132 | orchestrator | 2025-05-04 00:56:36 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state STARTED 2025-05-04 00:56:36.179457 | orchestrator | 
2025-05-04 00:56:36 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:36.180389 | orchestrator | 2025-05-04 00:56:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:36.180564 | orchestrator | 2025-05-04 00:56:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:39.233238 | orchestrator | 2025-05-04 00:56:39 | INFO  | Task c30e8b0a-5c25-4785-ae16-c49ec653aaae is in state SUCCESS 2025-05-04 00:56:39.234523 | orchestrator | 2025-05-04 00:56:39.235279 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-04 00:56:39.235414 | orchestrator | 2025-05-04 00:56:39.235435 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-04 00:56:39.235450 | orchestrator | 2025-05-04 00:56:39.235465 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-04 00:56:39.235506 | orchestrator | Sunday 04 May 2025 00:43:29 +0000 (0:00:01.660) 0:00:01.660 ************ 2025-05-04 00:56:39.235523 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.235539 | orchestrator | 2025-05-04 00:56:39.235553 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-04 00:56:39.235583 | orchestrator | Sunday 04 May 2025 00:43:30 +0000 (0:00:01.081) 0:00:02.741 ************ 2025-05-04 00:56:39.235598 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:56:39.235614 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-04 00:56:39.235628 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-04 00:56:39.235642 | orchestrator | 2025-05-04 00:56:39.235656 | orchestrator | TASK [ceph-facts : 
include facts.yml] ******************************************
2025-05-04 00:56:39.235670 | orchestrator | Sunday 04 May 2025  00:43:31 +0000 (0:00:00.504) 0:00:03.246 ************
2025-05-04 00:56:39.235685 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.237203 | orchestrator |
2025-05-04 00:56:39.237238 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-04 00:56:39.237255 | orchestrator | Sunday 04 May 2025  00:43:32 +0000 (0:00:01.273) 0:00:04.520 ************
2025-05-04 00:56:39.237271 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.237287 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.237302 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.237317 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.237332 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.237348 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.237363 | orchestrator |
2025-05-04 00:56:39.237379 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-04 00:56:39.237395 | orchestrator | Sunday 04 May 2025  00:43:33 +0000 (0:00:01.370) 0:00:05.890 ************
2025-05-04 00:56:39.237410 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.237425 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.237440 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.237455 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.237471 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.237486 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.237501 | orchestrator |
2025-05-04 00:56:39.237516 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-04 00:56:39.237532 | orchestrator | Sunday 04 May 2025  00:43:34 +0000 (0:00:00.925) 0:00:06.816 ************
2025-05-04 00:56:39.237547 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.237562 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.237578 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.237593 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.237608 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.237623 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.237638 | orchestrator |
2025-05-04 00:56:39.237654 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-04 00:56:39.237725 | orchestrator | Sunday 04 May 2025  00:43:35 +0000 (0:00:01.033) 0:00:07.850 ************
2025-05-04 00:56:39.237744 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.237761 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.237830 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.237847 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.237863 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.237879 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.237902 | orchestrator |
2025-05-04 00:56:39.237916 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-04 00:56:39.237930 | orchestrator | Sunday 04 May 2025  00:43:36 +0000 (0:00:01.064) 0:00:08.914 ************
2025-05-04 00:56:39.237944 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.237972 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.237987 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.238001 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.238015 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.238089 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.238104 | orchestrator |
2025-05-04 00:56:39.238117 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-04 00:56:39.238130 | orchestrator | Sunday 04 May 2025  00:43:37 +0000 (0:00:00.980) 0:00:09.895 ************
2025-05-04 00:56:39.238142 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.238155 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.238167 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.238180 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.238192 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.238225 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.238239 | orchestrator |
2025-05-04 00:56:39.238251 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-04 00:56:39.238264 | orchestrator | Sunday 04 May 2025  00:43:38 +0000 (0:00:00.949) 0:00:10.844 ************
2025-05-04 00:56:39.238313 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.238329 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.238394 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.238408 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.238421 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.238494 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.238510 | orchestrator |
2025-05-04 00:56:39.238524 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-04 00:56:39.238536 | orchestrator | Sunday 04 May 2025  00:43:39 +0000 (0:00:00.673) 0:00:11.518 ************
2025-05-04 00:56:39.238549 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.238561 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.238573 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.238586 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.238598 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.238611 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.238623 | orchestrator |
2025-05-04 00:56:39.238649 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-04 00:56:39.238662 | orchestrator | Sunday 04 May 2025  00:43:40 +0000 (0:00:01.105) 0:00:12.623 ************
2025-05-04 00:56:39.238725 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:56:39.238742 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:56:39.238754 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:56:39.238783 | orchestrator |
2025-05-04 00:56:39.238797 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-04 00:56:39.238810 | orchestrator | Sunday 04 May 2025  00:43:41 +0000 (0:00:00.775) 0:00:13.398 ************
2025-05-04 00:56:39.238823 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.238836 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.238849 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.238861 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.238874 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.238886 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.238899 | orchestrator |
2025-05-04 00:56:39.238911 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-04 00:56:39.238924 | orchestrator | Sunday 04 May 2025  00:43:43 +0000 (0:00:01.873) 0:00:15.272 ************
2025-05-04 00:56:39.238937 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:56:39.238950 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:56:39.238962 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:56:39.239000 | orchestrator |
2025-05-04 00:56:39.239013 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-04 00:56:39.239088 | orchestrator | Sunday 04 May 2025  00:43:45 +0000 (0:00:02.724) 0:00:17.997 ************
2025-05-04 00:56:39.239103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:56:39.239116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-04 00:56:39.239128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-04 00:56:39.239141 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239154 | orchestrator |
2025-05-04 00:56:39.239166 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-04 00:56:39.239186 | orchestrator | Sunday 04 May 2025  00:43:46 +0000 (0:00:00.506) 0:00:18.503 ************
2025-05-04 00:56:39.239200 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239214 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239227 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239240 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239253 | orchestrator |
2025-05-04 00:56:39.239266 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-04 00:56:39.239278 | orchestrator | Sunday 04 May 2025  00:43:47 +0000 (0:00:00.921) 0:00:19.425 ************
2025-05-04 00:56:39.239292 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239306 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239319 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239332 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239344 | orchestrator |
2025-05-04 00:56:39.239357 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-05-04 00:56:39.239377 | orchestrator | Sunday 04 May 2025  00:43:47 +0000 (0:00:00.253) 0:00:19.679 ************
2025-05-04 00:56:39.239415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-04 00:43:43.809894', 'end': '2025-05-04 00:43:44.066779', 'delta': '0:00:00.256885', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239439 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-04 00:43:44.564123', 'end': '2025-05-04 00:43:44.834312', 'delta': '0:00:00.270189', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239476 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-04 00:43:45.390018', 'end': '2025-05-04 00:43:45.646908', 'delta': '0:00:00.256890', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-04 00:56:39.239491 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239504 | orchestrator |
2025-05-04 00:56:39.239517 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-05-04 00:56:39.239530 | orchestrator | Sunday 04 May 2025  00:43:47 +0000 (0:00:00.311) 0:00:19.990 ************
2025-05-04 00:56:39.239543 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.239556 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.239568 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.239581 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.239594 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.239606 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.239618 | orchestrator |
2025-05-04 00:56:39.239631 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-05-04 00:56:39.239644 | orchestrator | Sunday 04 May 2025  00:43:49 +0000 (0:00:01.346) 0:00:21.336 ************
2025-05-04 00:56:39.239656 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.239761 | orchestrator |
2025-05-04 00:56:39.239829 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-04 00:56:39.239843 | orchestrator | Sunday 04 May 2025  00:43:49 +0000 (0:00:00.651) 0:00:21.988 ************
2025-05-04 00:56:39.239856 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239868 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.239881 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.239894 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.239906 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.239919 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.239931 | orchestrator |
2025-05-04 00:56:39.239944 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-05-04 00:56:39.239957 | orchestrator | Sunday 04 May 2025  00:43:50 +0000 (0:00:00.799) 0:00:22.788 ************
2025-05-04 00:56:39.239970 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.239982 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.239995 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240007 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240025 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240047 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240060 | orchestrator |
2025-05-04 00:56:39.240073 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-04 00:56:39.240093 | orchestrator | Sunday 04 May 2025  00:43:51 +0000 (0:00:01.355) 0:00:24.143 ************
2025-05-04 00:56:39.240106 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240119 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240131 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240144 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240156 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240168 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240180 | orchestrator |
2025-05-04 00:56:39.240193 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-05-04 00:56:39.240206 | orchestrator | Sunday 04 May 2025  00:43:52 +0000 (0:00:00.903) 0:00:25.047 ************
2025-05-04 00:56:39.240224 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240235 | orchestrator |
2025-05-04 00:56:39.240245 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-05-04 00:56:39.240255 | orchestrator | Sunday 04 May 2025  00:43:53 +0000 (0:00:00.523) 0:00:25.571 ************
2025-05-04 00:56:39.240266 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240276 | orchestrator |
2025-05-04 00:56:39.240286 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-04 00:56:39.240297 | orchestrator | Sunday 04 May 2025  00:43:53 +0000 (0:00:00.339) 0:00:25.911 ************
2025-05-04 00:56:39.240307 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240317 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240327 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240338 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240348 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240359 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240369 | orchestrator |
2025-05-04 00:56:39.240379 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-05-04 00:56:39.240389 | orchestrator | Sunday 04 May 2025  00:43:54 +0000 (0:00:00.787) 0:00:26.698 ************
2025-05-04 00:56:39.240400 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240410 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240420 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240430 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240441 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240451 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240461 | orchestrator |
2025-05-04 00:56:39.240471 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-05-04 00:56:39.240482 | orchestrator | Sunday 04 May 2025  00:43:55 +0000 (0:00:01.020) 0:00:27.718 ************
2025-05-04 00:56:39.240492 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240502 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240513 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240523 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240533 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240544 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240554 | orchestrator |
2025-05-04 00:56:39.240564 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-05-04 00:56:39.240575 | orchestrator | Sunday 04 May 2025  00:43:56 +0000 (0:00:00.968) 0:00:28.687 ************
2025-05-04 00:56:39.240585 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240596 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240606 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240616 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240627 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240637 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240647 | orchestrator |
2025-05-04 00:56:39.240657 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-05-04 00:56:39.240668 | orchestrator | Sunday 04 May 2025  00:43:57 +0000 (0:00:01.096) 0:00:29.783 ************
2025-05-04 00:56:39.240678 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240693 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240703 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240713 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240723 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240733 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240744 | orchestrator |
2025-05-04 00:56:39.240754 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-05-04 00:56:39.240778 | orchestrator | Sunday 04 May 2025  00:43:58 +0000 (0:00:01.276) 0:00:31.059 ************
2025-05-04 00:56:39.240789 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240799 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240810 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240820 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240830 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240840 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240851 | orchestrator |
2025-05-04 00:56:39.240865 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-04 00:56:39.240875 | orchestrator | Sunday 04 May 2025  00:44:00 +0000 (0:00:01.219) 0:00:32.279 ************
2025-05-04 00:56:39.240886 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.240896 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.240906 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.240916 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.240926 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.240937 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.240953 | orchestrator |
2025-05-04 00:56:39.240963 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-05-04 00:56:39.240974 | orchestrator | Sunday 04 May 2025  00:44:00 +0000 (0:00:00.814) 0:00:33.093 ************
2025-05-04 00:56:39.240985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.240996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part1', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part14', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part15', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part16', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b', 'scsi-SQEMU_QEMU_HARDDISK_f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44aea083-53c7-4db3-b476-f0e15c33499e', 'scsi-SQEMU_QEMU_HARDDISK_44aea083-53c7-4db3-b476-f0e15c33499e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a', 'scsi-SQEMU_QEMU_HARDDISK_9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-04 00:56:39.241275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c', 'scsi-SQEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part1', 'scsi-SQEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part14', 'scsi-SQEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part15', 'scsi-SQEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part16', 'scsi-SQEMU_QEMU_HARDDISK_13bb42a8-f5c9-4e2d-b57e-2b129d56f15c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6952e91-4add-41f4-9682-2820842eaefb', 'scsi-SQEMU_QEMU_HARDDISK_e6952e91-4add-41f4-9682-2820842eaefb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-04 00:56:39.241321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843cf234-6aef-404a-a841-1f1650f95beb', 'scsi-SQEMU_QEMU_HARDDISK_843cf234-6aef-404a-a841-1f1650f95beb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a335202a-bc46-4a1a-9390-24712f04f8da', 'scsi-SQEMU_QEMU_HARDDISK_a335202a-bc46-4a1a-9390-24712f04f8da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241354 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.241365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09', 'scsi-SQEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part1', 'scsi-SQEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part14', 'scsi-SQEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part15', 'scsi-SQEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part16', 'scsi-SQEMU_QEMU_HARDDISK_04c131bc-fe2e-4a5c-b435-65085a31af09-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241482 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_228c4a8e-d362-4d42-8ea3-c65a43234221', 'scsi-SQEMU_QEMU_HARDDISK_228c4a8e-d362-4d42-8ea3-c65a43234221'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12665b64-aca9-4755-9dee-a26132b82b0a', 'scsi-SQEMU_QEMU_HARDDISK_12665b64-aca9-4755-9dee-a26132b82b0a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_887667df-8a23-4f97-9ff0-05cbc5f29729', 'scsi-SQEMU_QEMU_HARDDISK_887667df-8a23-4f97-9ff0-05cbc5f29729'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241534 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.241545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c91b3cb6--7edb--5452--ada6--d38ce882942b-osd--block--c91b3cb6--7edb--5452--ada6--d38ce882942b', 'dm-uuid-LVM-71pBT1pjpRJYqyJHxHhbblalssmM2V04sOpQmwNgI8Lt2BvclRbx5w9p6VWnirL1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--bdbd5a24--b46a--5ddb--91ef--7688b352f27d-osd--block--bdbd5a24--b46a--5ddb--91ef--7688b352f27d', 'dm-uuid-LVM-63jZAGyNOSPWdLsrlUqR77pKYvgdH1swvi4BtdgMuM7w9hwSDZSx73SyxoZdyEWt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241650 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.241669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03a186d7--e7a2--5e82--b5c3--d5631de29e6f-osd--block--03a186d7--e7a2--5e82--b5c3--d5631de29e6f', 'dm-uuid-LVM-H898Fax1Eiy3jXiJQBv4rJ9xtSDMmGSGpTY9UgsuLbXoAHWnC10rGrmezMIppAsI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c91b3cb6--7edb--5452--ada6--d38ce882942b-osd--block--c91b3cb6--7edb--5452--ada6--d38ce882942b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BiBYTr-IYLc-SCFQ-Z6RX-MbnW-2cU4-9GxVRN', 'scsi-0QEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7', 'scsi-SQEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e087d3a--1c7d--5e62--b576--6c121f884fde-osd--block--5e087d3a--1c7d--5e62--b576--6c121f884fde', 'dm-uuid-LVM-1zdNh4CjG3AdEpVaRpghSqVud1VNP1C4IptBo6f0ecsGam9mjnq3e1LApYI4h9nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bdbd5a24--b46a--5ddb--91ef--7688b352f27d-osd--block--bdbd5a24--b46a--5ddb--91ef--7688b352f27d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zV9mSs-EDEo-rgO4-kOfM-VSZk-7TEp-asrc0o', 'scsi-0QEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93', 'scsi-SQEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc', 'scsi-SQEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.241936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-05-04 00:56:39.241947 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.241957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.241994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--03a186d7--e7a2--5e82--b5c3--d5631de29e6f-osd--block--03a186d7--e7a2--5e82--b5c3--d5631de29e6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2PuX0W-zM0h-IVnU-IADS-EXvd-Uvmr-f02hhG', 'scsi-0QEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef', 'scsi-SQEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5e087d3a--1c7d--5e62--b576--6c121f884fde-osd--block--5e087d3a--1c7d--5e62--b576--6c121f884fde'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2JkK3E-6SRE-OITx-acql-vGfX-hhz1-GUpcK7', 'scsi-0QEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254', 'scsi-SQEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f', 'scsi-SQEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242123 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.242133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--98453abf--c748--514f--aec7--544322a7c940-osd--block--98453abf--c748--514f--aec7--544322a7c940', 'dm-uuid-LVM-7XxLxh6qGXWFUIar0pkE6d5efe3ZgUEsKv0g1Pt2G6w5HENw6FKkde3bcDPpeSXa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f54bf35c--9381--504c--8591--afe4d3e61469-osd--block--f54bf35c--9381--504c--8591--afe4d3e61469', 'dm-uuid-LVM-u7IJocf2PGV2a4kCLt3D5MBNkxQ6mVWJXRJqebLygXn6LkGRpzxE39be6s4PnQoh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242229 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:56:39.242255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242276 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98453abf--c748--514f--aec7--544322a7c940-osd--block--98453abf--c748--514f--aec7--544322a7c940'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qZ2Fis-kumD-iBYV-dEZI-JiTk-Stdf-rfQvuQ', 'scsi-0QEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d', 'scsi-SQEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f54bf35c--9381--504c--8591--afe4d3e61469-osd--block--f54bf35c--9381--504c--8591--afe4d3e61469'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EhrDgY-tX6U-FBj6-Aknv-1OX0-9TAa-8r8edJ', 'scsi-0QEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783', 'scsi-SQEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123', 'scsi-SQEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:56:39.242720 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.242731 | orchestrator | 2025-05-04 00:56:39.242741 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-04 00:56:39.242752 | orchestrator | Sunday 04 May 2025 00:44:02 +0000 (0:00:02.011) 0:00:35.105 ************ 2025-05-04 00:56:39.242777 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.242789 | orchestrator | 2025-05-04 00:56:39.242800 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-04 00:56:39.242810 | orchestrator | Sunday 04 May 2025 00:44:03 +0000 (0:00:00.335) 0:00:35.440 ************ 2025-05-04 00:56:39.242826 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.242836 | orchestrator | 2025-05-04 00:56:39.242847 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-04 00:56:39.242857 | orchestrator | Sunday 04 May 2025 00:44:03 +0000 (0:00:00.167) 0:00:35.608 ************ 2025-05-04 00:56:39.242867 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.242877 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.242887 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.242898 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.242908 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.242918 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.242929 | orchestrator | 2025-05-04 00:56:39.242939 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-04 00:56:39.242949 | orchestrator | Sunday 04 May 2025 00:44:04 +0000 (0:00:00.870) 0:00:36.478 ************ 2025-05-04 00:56:39.242960 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.242970 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.242980 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.242991 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.243001 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.243011 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.243021 | orchestrator | 2025-05-04 00:56:39.243032 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-04 00:56:39.243042 | orchestrator | Sunday 04 May 2025 00:44:06 +0000 (0:00:01.775) 0:00:38.254 ************ 2025-05-04 00:56:39.243052 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.243063 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.243073 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.243083 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.243093 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.243103 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.243113 | orchestrator | 
2025-05-04 00:56:39.243124 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:56:39.243134 | orchestrator | Sunday 04 May 2025 00:44:07 +0000 (0:00:01.000) 0:00:39.255 ************ 2025-05-04 00:56:39.243145 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.243155 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.243166 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.243176 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.243186 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.243258 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.243274 | orchestrator | 2025-05-04 00:56:39.243284 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 00:56:39.243295 | orchestrator | Sunday 04 May 2025 00:44:08 +0000 (0:00:01.204) 0:00:40.460 ************ 2025-05-04 00:56:39.243305 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.243315 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.243325 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.243335 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.243346 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.243356 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.243366 | orchestrator | 2025-05-04 00:56:39.243376 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:56:39.243387 | orchestrator | Sunday 04 May 2025 00:44:09 +0000 (0:00:00.956) 0:00:41.416 ************ 2025-05-04 00:56:39.243397 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.243407 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.243417 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.243428 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.243438 | orchestrator | 
skipping: [testbed-node-4] 2025-05-04 00:56:39.243448 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.243458 | orchestrator | 2025-05-04 00:56:39.243468 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 00:56:39.243485 | orchestrator | Sunday 04 May 2025 00:44:10 +0000 (0:00:01.530) 0:00:42.947 ************ 2025-05-04 00:56:39.243495 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.243505 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.243516 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.243526 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.243536 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.243546 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.243561 | orchestrator | 2025-05-04 00:56:39.243572 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-04 00:56:39.243583 | orchestrator | Sunday 04 May 2025 00:44:12 +0000 (0:00:01.469) 0:00:44.417 ************ 2025-05-04 00:56:39.243593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.243604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.243615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.243625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-04 00:56:39.243635 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-04 00:56:39.243646 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-04 00:56:39.243656 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.243667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-04 00:56:39.243677 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-04 00:56:39.243712 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2025-05-04 00:56:39.243723 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.243733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:56:39.243743 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.243754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:56:39.243808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:56:39.243821 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.243832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:56:39.243842 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:56:39.243852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:56:39.243862 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:56:39.243873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:56:39.243883 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.243895 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:56:39.243906 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.243918 | orchestrator | 2025-05-04 00:56:39.243929 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-04 00:56:39.243940 | orchestrator | Sunday 04 May 2025 00:44:14 +0000 (0:00:02.217) 0:00:46.634 ************ 2025-05-04 00:56:39.243952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.243964 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-04 00:56:39.243975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.243985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-04 00:56:39.244002 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-1)  2025-05-04 00:56:39.244013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:56:39.244023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.244034 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-04 00:56:39.244044 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-04 00:56:39.244054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:56:39.244064 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.244078 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.244087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:56:39.244096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-04 00:56:39.244104 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.244113 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:56:39.244122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:56:39.244130 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.244139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:56:39.244148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:56:39.244213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:56:39.244226 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.244235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:56:39.244244 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.244253 | orchestrator | 2025-05-04 00:56:39.244262 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-04 00:56:39.244270 | orchestrator | Sunday 04 May 2025 00:44:16 +0000 (0:00:02.070) 
0:00:48.704 ************ 2025-05-04 00:56:39.244279 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:56:39.244288 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-04 00:56:39.244297 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-04 00:56:39.244306 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-04 00:56:39.244315 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-04 00:56:39.244323 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-04 00:56:39.244332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-04 00:56:39.244341 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-04 00:56:39.244349 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-04 00:56:39.244358 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-04 00:56:39.244367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-04 00:56:39.244375 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-04 00:56:39.244384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-04 00:56:39.244392 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-04 00:56:39.244401 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-04 00:56:39.244410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-04 00:56:39.244419 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-04 00:56:39.244427 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-04 00:56:39.244436 | orchestrator | 2025-05-04 00:56:39.244444 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-04 00:56:39.244453 | orchestrator | Sunday 04 May 2025 00:44:22 +0000 (0:00:05.634) 0:00:54.338 ************ 2025-05-04 00:56:39.244462 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.244471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.244480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.244488 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.244497 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-04 00:56:39.244506 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-04 00:56:39.244514 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-04 00:56:39.244523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-04 00:56:39.244532 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-04 00:56:39.244541 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-04 00:56:39.244549 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.244567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:56:39.244576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:56:39.244584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:56:39.244593 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.244602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:56:39.244611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:56:39.244619 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:56:39.244628 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.244637 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.244645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:56:39.244654 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:56:39.244662 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:56:39.244671 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.244680 | orchestrator | 2025-05-04 00:56:39.244689 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-04 00:56:39.244697 | orchestrator | Sunday 04 May 2025 00:44:23 +0000 (0:00:01.350) 0:00:55.688 ************ 2025-05-04 00:56:39.244706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.244718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.244727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.244735 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-04 00:56:39.244744 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-04 00:56:39.244753 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.244762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-04 00:56:39.244783 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-04 00:56:39.244792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-04 00:56:39.244800 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-04 00:56:39.244809 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.244818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:56:39.244826 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.244835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:56:39.244845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:56:39.244854 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:56:39.244913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 
00:56:39.244926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:56:39.244936 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.244946 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.244956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:56:39.244967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:56:39.244977 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:56:39.244986 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.244996 | orchestrator | 2025-05-04 00:56:39.245006 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-04 00:56:39.245016 | orchestrator | Sunday 04 May 2025 00:44:24 +0000 (0:00:01.511) 0:00:57.200 ************ 2025-05-04 00:56:39.245026 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-04 00:56:39.245036 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:56:39.245045 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:56:39.245061 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:56:39.245071 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-04 00:56:39.245081 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:56:39.245091 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:56:39.245100 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:56:39.245110 | orchestrator | ok: [testbed-node-2] 
=> (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-04 00:56:39.245120 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:56:39.245130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:56:39.245140 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:56:39.245150 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:56:39.245160 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:56:39.245170 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:56:39.245180 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.245190 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.245199 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:56:39.245212 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:56:39.245221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:56:39.245230 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.245239 | orchestrator | 2025-05-04 00:56:39.245248 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-04 00:56:39.245257 | orchestrator | Sunday 04 May 2025 00:44:26 +0000 (0:00:01.825) 0:00:59.026 ************ 2025-05-04 00:56:39.245266 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.245275 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.245283 | orchestrator | skipping: [testbed-node-2] 2025-05-04 
00:56:39.245292 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.245301 | orchestrator |
2025-05-04 00:56:39.245310 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-04 00:56:39.245319 | orchestrator | Sunday 04 May 2025 00:44:28 +0000 (0:00:01.257) 0:01:00.284 ************
2025-05-04 00:56:39.245328 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245336 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.245345 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.245354 | orchestrator |
2025-05-04 00:56:39.245362 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-04 00:56:39.245371 | orchestrator | Sunday 04 May 2025 00:44:28 +0000 (0:00:00.578) 0:01:00.862 ************
2025-05-04 00:56:39.245380 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245403 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.245412 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.245421 | orchestrator |
2025-05-04 00:56:39.245430 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-04 00:56:39.245439 | orchestrator | Sunday 04 May 2025 00:44:29 +0000 (0:00:00.842) 0:01:01.704 ************
2025-05-04 00:56:39.245448 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245457 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.245469 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.245478 | orchestrator |
2025-05-04 00:56:39.245487 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-04 00:56:39.245496 | orchestrator | Sunday 04 May 2025 00:44:29 +0000 (0:00:00.424) 0:01:02.129 ************
2025-05-04 00:56:39.245505 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.245514 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.245523 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.245531 | orchestrator |
2025-05-04 00:56:39.245540 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-04 00:56:39.245600 | orchestrator | Sunday 04 May 2025 00:44:30 +0000 (0:00:00.714) 0:01:02.844 ************
2025-05-04 00:56:39.245612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.245621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.245630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.245638 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245647 | orchestrator |
2025-05-04 00:56:39.245656 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-04 00:56:39.245665 | orchestrator | Sunday 04 May 2025 00:44:31 +0000 (0:00:00.639) 0:01:03.484 ************
2025-05-04 00:56:39.245674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.245682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.245691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.245700 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245708 | orchestrator |
2025-05-04 00:56:39.245717 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-04 00:56:39.245726 | orchestrator | Sunday 04 May 2025 00:44:32 +0000 (0:00:00.885) 0:01:04.369 ************
2025-05-04 00:56:39.245734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.245743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.245752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.245760 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245787 | orchestrator |
2025-05-04 00:56:39.245796 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.245805 | orchestrator | Sunday 04 May 2025 00:44:33 +0000 (0:00:01.097) 0:01:05.467 ************
2025-05-04 00:56:39.245814 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.245823 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.245832 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.245840 | orchestrator |
2025-05-04 00:56:39.245849 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-04 00:56:39.245858 | orchestrator | Sunday 04 May 2025 00:44:33 +0000 (0:00:00.617) 0:01:06.085 ************
2025-05-04 00:56:39.245867 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.245876 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.245885 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.245897 | orchestrator |
2025-05-04 00:56:39.245906 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-04 00:56:39.245914 | orchestrator | Sunday 04 May 2025 00:44:35 +0000 (0:00:01.246) 0:01:07.331 ************
2025-05-04 00:56:39.245923 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245932 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.245941 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.245949 | orchestrator |
2025-05-04 00:56:39.245958 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.245967 | orchestrator | Sunday 04 May 2025 00:44:35 +0000 (0:00:00.521) 0:01:07.852 ************
2025-05-04 00:56:39.245976 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.245984 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.245993 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246007 | orchestrator |
2025-05-04 00:56:39.246036 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-04 00:56:39.246047 | orchestrator | Sunday 04 May 2025 00:44:36 +0000 (0:00:00.703) 0:01:08.555 ************
2025-05-04 00:56:39.246056 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.246065 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.246074 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.246083 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.246091 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.246100 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246109 | orchestrator |
2025-05-04 00:56:39.246117 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-04 00:56:39.246126 | orchestrator | Sunday 04 May 2025 00:44:36 +0000 (0:00:00.617) 0:01:09.173 ************
2025-05-04 00:56:39.246135 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.246143 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.246152 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.246161 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.246170 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.246179 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246187 | orchestrator |
2025-05-04 00:56:39.246199 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-04 00:56:39.246208 | orchestrator | Sunday 04 May 2025 00:44:37 +0000 (0:00:00.971) 0:01:10.144 ************
2025-05-04 00:56:39.246217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.246226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.246236 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-04 00:56:39.246246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.246256 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.246266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-04 00:56:39.246276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-04 00:56:39.246285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-04 00:56:39.246295 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.246305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-04 00:56:39.246365 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-04 00:56:39.246378 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246388 | orchestrator |
2025-05-04 00:56:39.246398 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-04 00:56:39.246408 | orchestrator | Sunday 04 May 2025 00:44:38 +0000 (0:00:01.040) 0:01:11.185 ************
2025-05-04 00:56:39.246418 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.246428 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.246437 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.246447 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.246457 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.246467 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246477 | orchestrator |
2025-05-04 00:56:39.246487 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-04 00:56:39.246497 | orchestrator | Sunday 04 May 2025 00:44:39 +0000 (0:00:00.947) 0:01:12.133 ************
2025-05-04 00:56:39.246507 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:56:39.246516 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:56:39.246526 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:56:39.246541 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-04 00:56:39.246551 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-04 00:56:39.246561 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-04 00:56:39.246571 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-04 00:56:39.246580 | orchestrator |
2025-05-04 00:56:39.246590 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-04 00:56:39.246600 | orchestrator | Sunday 04 May 2025 00:44:40 +0000 (0:00:00.937) 0:01:13.071 ************
2025-05-04 00:56:39.246609 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:56:39.246618 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:56:39.246627 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:56:39.246635 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-04 00:56:39.246644 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-04 00:56:39.246653 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-04 00:56:39.246662 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-04 00:56:39.246670 | orchestrator |
2025-05-04 00:56:39.246679 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-04 00:56:39.246688 | orchestrator | Sunday 04 May 2025 00:44:42 +0000 (0:00:01.882) 0:01:14.953 ************
2025-05-04 00:56:39.246697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.246707 | orchestrator |
2025-05-04 00:56:39.246716 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-04 00:56:39.246724 | orchestrator | Sunday 04 May 2025 00:44:43 +0000 (0:00:01.136) 0:01:16.089 ************
2025-05-04 00:56:39.246733 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.246742 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.246751 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.246760 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.246780 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.246789 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.246798 | orchestrator |
2025-05-04 00:56:39.246807 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-04 00:56:39.246815 | orchestrator | Sunday 04 May 2025 00:44:44 +0000 (0:00:00.840) 0:01:16.929 ************
2025-05-04 00:56:39.246837 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.246846 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.246855 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.246863 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.246872 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.246881 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.246890 | orchestrator |
2025-05-04 00:56:39.246899 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-04 00:56:39.246907 | orchestrator | Sunday 04 May 2025 00:44:46 +0000 (0:00:01.680) 0:01:18.610 ************
2025-05-04 00:56:39.246916 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.246925 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.246934 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.246942 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.246951 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.246959 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.246968 | orchestrator |
2025-05-04 00:56:39.246977 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-04 00:56:39.246990 | orchestrator | Sunday 04 May 2025 00:44:47 +0000 (0:00:01.345) 0:01:19.956 ************
2025-05-04 00:56:39.246999 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247008 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247017 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247026 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.247034 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.247043 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.247052 | orchestrator |
2025-05-04 00:56:39.247061 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-04 00:56:39.247069 | orchestrator | Sunday 04 May 2025 00:44:49 +0000 (0:00:01.627) 0:01:21.583 ************
2025-05-04 00:56:39.247078 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.247087 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247150 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.247163 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247171 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.247180 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247196 | orchestrator |
2025-05-04 00:56:39.247205 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-04 00:56:39.247214 | orchestrator | Sunday 04 May 2025 00:44:50 +0000 (0:00:01.342) 0:01:22.925 ************
2025-05-04 00:56:39.247222 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247231 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247239 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247248 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247257 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247265 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247274 | orchestrator |
2025-05-04 00:56:39.247282 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-04 00:56:39.247291 | orchestrator | Sunday 04 May 2025 00:44:51 +0000 (0:00:00.985) 0:01:23.910 ************
2025-05-04 00:56:39.247300 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247308 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247317 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247325 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247334 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247343 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247351 | orchestrator |
2025-05-04 00:56:39.247360 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-04 00:56:39.247368 | orchestrator | Sunday 04 May 2025 00:44:52 +0000 (0:00:00.982) 0:01:24.893 ************
2025-05-04 00:56:39.247377 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247386 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247394 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247403 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247411 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247420 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247428 | orchestrator |
2025-05-04 00:56:39.247437 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-04 00:56:39.247446 | orchestrator | Sunday 04 May 2025 00:44:53 +0000 (0:00:00.509) 0:01:25.403 ************
2025-05-04 00:56:39.247454 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247463 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247472 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247480 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247489 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247497 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247506 | orchestrator |
2025-05-04 00:56:39.247515 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-04 00:56:39.247523 | orchestrator | Sunday 04 May 2025 00:44:53 +0000 (0:00:00.768) 0:01:26.171 ************
2025-05-04 00:56:39.247532 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247540 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247554 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247563 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247572 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247580 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247589 | orchestrator |
2025-05-04 00:56:39.247598 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-04 00:56:39.247606 | orchestrator | Sunday 04 May 2025 00:44:54 +0000 (0:00:00.787) 0:01:26.959 ************
2025-05-04 00:56:39.247615 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.247624 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.247633 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.247641 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.247650 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.247659 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.247667 | orchestrator |
2025-05-04 00:56:39.247676 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-04 00:56:39.247685 | orchestrator | Sunday 04 May 2025 00:44:55 +0000 (0:00:01.210) 0:01:28.170 ************
2025-05-04 00:56:39.247694 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247702 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247711 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247719 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247728 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247736 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247745 | orchestrator |
2025-05-04 00:56:39.247754 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-04 00:56:39.247762 | orchestrator | Sunday 04 May 2025 00:44:56 +0000 (0:00:00.611) 0:01:28.781 ************
2025-05-04 00:56:39.247782 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.247793 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.247803 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.247812 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.247822 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.247832 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.247842 | orchestrator |
2025-05-04 00:56:39.247851 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-04 00:56:39.247861 | orchestrator | Sunday 04 May 2025 00:44:57 +0000 (0:00:00.975) 0:01:29.756 ************
2025-05-04 00:56:39.247871 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247880 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247891 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247900 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.247915 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.247926 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.247936 | orchestrator |
2025-05-04 00:56:39.247946 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-04 00:56:39.247955 | orchestrator | Sunday 04 May 2025 00:44:58 +0000 (0:00:00.571) 0:01:30.328 ************
2025-05-04 00:56:39.247965 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.247975 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.247984 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.247994 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.248004 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.248013 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.248023 | orchestrator |
2025-05-04 00:56:39.248032 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-04 00:56:39.248090 | orchestrator | Sunday 04 May 2025 00:44:59 +0000 (0:00:00.948) 0:01:31.276 ************
2025-05-04 00:56:39.248102 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248112 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248122 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248131 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.248141 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.248150 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.248163 | orchestrator |
2025-05-04 00:56:39.248172 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-04 00:56:39.248181 | orchestrator | Sunday 04 May 2025 00:44:59 +0000 (0:00:00.793) 0:01:32.070 ************
2025-05-04 00:56:39.248190 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248198 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248207 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248216 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248224 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248233 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248241 | orchestrator |
2025-05-04 00:56:39.248250 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-04 00:56:39.248259 | orchestrator | Sunday 04 May 2025 00:45:00 +0000 (0:00:01.004) 0:01:33.075 ************
2025-05-04 00:56:39.248268 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248276 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248285 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248293 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248302 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248311 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248319 | orchestrator |
2025-05-04 00:56:39.248328 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-04 00:56:39.248337 | orchestrator | Sunday 04 May 2025 00:45:01 +0000 (0:00:00.632) 0:01:33.707 ************
2025-05-04 00:56:39.248345 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.248354 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.248363 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.248371 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248380 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248388 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248397 | orchestrator |
2025-05-04 00:56:39.248406 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-04 00:56:39.248415 | orchestrator | Sunday 04 May 2025 00:45:02 +0000 (0:00:00.881) 0:01:34.589 ************
2025-05-04 00:56:39.248423 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.248432 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.248441 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.248449 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.248458 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.248467 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.248475 | orchestrator |
2025-05-04 00:56:39.248495 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-04 00:56:39.248508 | orchestrator | Sunday 04 May 2025 00:45:03 +0000 (0:00:00.642) 0:01:35.232 ************
2025-05-04 00:56:39.248517 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248525 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248534 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248543 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248552 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248560 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248569 | orchestrator |
2025-05-04 00:56:39.248578 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-04 00:56:39.248587 | orchestrator | Sunday 04 May 2025 00:45:04 +0000 (0:00:00.976) 0:01:36.209 ************
2025-05-04 00:56:39.248595 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248604 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248613 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248621 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248630 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248638 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248650 | orchestrator |
2025-05-04 00:56:39.248659 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-04 00:56:39.248668 | orchestrator | Sunday 04 May 2025 00:45:05 +0000 (0:00:01.014) 0:01:37.223 ************
2025-05-04 00:56:39.248681 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248690 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248698 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248707 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248716 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248724 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248733 | orchestrator |
2025-05-04 00:56:39.248742 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-04 00:56:39.248750 | orchestrator | Sunday 04 May 2025 00:45:05 +0000 (0:00:00.882) 0:01:38.105 ************
2025-05-04 00:56:39.248759 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248806 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248816 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248825 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248833 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248841 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248850 | orchestrator |
2025-05-04 00:56:39.248859 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-04 00:56:39.248868 | orchestrator | Sunday 04 May 2025 00:45:06 +0000 (0:00:00.986) 0:01:39.092 ************
2025-05-04 00:56:39.248877 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248885 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248894 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248902 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248911 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248920 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.248928 | orchestrator |
2025-05-04 00:56:39.248937 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-04 00:56:39.248946 | orchestrator | Sunday 04 May 2025 00:45:07 +0000 (0:00:00.648) 0:01:39.741 ************
2025-05-04 00:56:39.248954 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.248963 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.248971 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.248980 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.248989 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.248997 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249006 | orchestrator |
2025-05-04 00:56:39.249066 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-04 00:56:39.249079 | orchestrator | Sunday 04 May 2025 00:45:08 +0000 (0:00:00.817) 0:01:40.558 ************
2025-05-04 00:56:39.249088 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249096 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249105 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249113 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249122 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249130 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249139 | orchestrator |
2025-05-04 00:56:39.249148 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-04 00:56:39.249157 | orchestrator | Sunday 04 May 2025 00:45:08 +0000 (0:00:00.847) 0:01:41.182 ************
2025-05-04 00:56:39.249166 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249175 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249183 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249192 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249201 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249210 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249218 | orchestrator |
2025-05-04 00:56:39.249227 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-04 00:56:39.249236 | orchestrator | Sunday 04 May 2025 00:45:09 +0000 (0:00:00.847) 0:01:42.030 ************
2025-05-04 00:56:39.249245 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249253 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249267 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249276 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249285 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249293 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249302 | orchestrator |
2025-05-04 00:56:39.249311 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-04 00:56:39.249320 | orchestrator | Sunday 04 May 2025 00:45:10 +0000 (0:00:00.628) 0:01:42.659 ************
2025-05-04 00:56:39.249328 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249337 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249345 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249353 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249361 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249369 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249377 | orchestrator |
2025-05-04 00:56:39.249385 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-04 00:56:39.249393 | orchestrator | Sunday 04 May 2025 00:45:11 +0000 (0:00:00.830) 0:01:43.489 ************
2025-05-04 00:56:39.249401 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249410 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249418 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249430 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249438 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249446 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249455 | orchestrator |
2025-05-04 00:56:39.249463 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-04 00:56:39.249471 | orchestrator | Sunday 04 May 2025 00:45:11 +0000 (0:00:00.647) 0:01:44.137 ************
2025-05-04 00:56:39.249479 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249487 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249496 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249504 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249512 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249520 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249528 | orchestrator |
2025-05-04 00:56:39.249536 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-04 00:56:39.249544 | orchestrator | Sunday 04 May 2025 00:45:12 +0000 (0:00:00.969) 0:01:45.106 ************
2025-05-04 00:56:39.249553 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-04 00:56:39.249561 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-04 00:56:39.249569 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249577 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-04 00:56:39.249586 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-04 00:56:39.249594 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249602 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-04 00:56:39.249610 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-04 00:56:39.249618 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249626 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-04 00:56:39.249634 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-04 00:56:39.249642 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249650 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-04 00:56:39.249658 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-04 00:56:39.249667 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249675 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-04 00:56:39.249685 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-04 00:56:39.249694 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249703 | orchestrator |
2025-05-04 00:56:39.249712 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-04 00:56:39.249721 | orchestrator | Sunday 04 May 2025 00:45:13 +0000 (0:00:00.850) 0:01:45.957 ************
2025-05-04 00:56:39.249734 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-04 00:56:39.249744 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-04 00:56:39.249753 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249762 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-04 00:56:39.249784 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-04 00:56:39.249793 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.249802 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-04 00:56:39.249812 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-04 00:56:39.249865 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.249877 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-04 00:56:39.249887 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-04 00:56:39.249895 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.249905 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-04 00:56:39.249914 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-04 00:56:39.249923 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.249932 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-04 00:56:39.249941 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-04 00:56:39.249949 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.249959 | orchestrator |
2025-05-04 00:56:39.249967 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-04 00:56:39.249976 | orchestrator | Sunday 04 May 2025 00:45:15 +0000 (0:00:01.279) 0:01:47.237 ************
2025-05-04 00:56:39.249985 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.249994 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250003 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250012 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250042 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250051 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250061 | orchestrator |
2025-05-04 00:56:39.250070 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-04 00:56:39.250078 | orchestrator | Sunday 04 May 2025 00:45:15 +0000 (0:00:00.675) 0:01:47.912 ************
2025-05-04 00:56:39.250086 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250094 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250102 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250110 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250118 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250126 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250134 | orchestrator |
2025-05-04 00:56:39.250142 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-04 00:56:39.250151 | orchestrator | Sunday 04 May 2025 00:45:16 +0000 (0:00:01.073) 0:01:48.985 ************
2025-05-04 00:56:39.250159 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250167 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250175 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250182 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250190 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250198 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250206 | orchestrator |
2025-05-04 00:56:39.250214 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-04 00:56:39.250222 | orchestrator | Sunday 04 May 2025 00:45:17 +0000 (0:00:00.962) 0:01:49.948 ************
2025-05-04 00:56:39.250230 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250238 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250246 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250254 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250267 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250275 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250283 | orchestrator |
2025-05-04 00:56:39.250291 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-04 00:56:39.250299 | orchestrator | Sunday 04 May 2025 00:45:18 +0000 (0:00:01.000) 0:01:50.949 ************
2025-05-04 00:56:39.250307 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250315 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250323 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250331 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250352 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250360 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250372 | orchestrator |
2025-05-04 00:56:39.250384 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-04 00:56:39.250392 | orchestrator | Sunday 04 May 2025 00:45:19 +0000 (0:00:00.679) 0:01:51.628 ************
2025-05-04 00:56:39.250400 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250408 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250416 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250424 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250432 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250440 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250448 | orchestrator |
2025-05-04 00:56:39.250456 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-04 00:56:39.250464 | orchestrator | Sunday 04 May 2025 00:45:20 +0000 (0:00:01.017) 0:01:52.645 ************
2025-05-04 00:56:39.250473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-04 00:56:39.250480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-04 00:56:39.250489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-04 00:56:39.250497 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250505 | orchestrator |
2025-05-04 00:56:39.250513 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-04 00:56:39.250521 | orchestrator | Sunday 04 May 2025 00:45:20 +0000 (0:00:00.558) 0:01:53.204 ************
2025-05-04 00:56:39.250529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-04 00:56:39.250537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-04 00:56:39.250545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-04 00:56:39.250553 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250561 | orchestrator |
2025-05-04 00:56:39.250569 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-04 00:56:39.250577 | orchestrator | Sunday 04 May 2025 00:45:21 +0000 (0:00:00.564) 0:01:53.768 ************
2025-05-04 00:56:39.250585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-04 00:56:39.250593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-04 00:56:39.250601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-04 00:56:39.250656 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250668 | orchestrator |
2025-05-04 00:56:39.250677 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.250686 | orchestrator | Sunday 04 May 2025 00:45:21 +0000 (0:00:00.435) 0:01:54.204 ************
2025-05-04 00:56:39.250694 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250703 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250712 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250721 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250729 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250738 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250747 | orchestrator |
2025-05-04 00:56:39.250756 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-04 00:56:39.250776 | orchestrator | Sunday 04 May 2025 00:45:22 +0000 (0:00:00.662) 0:01:54.866 ************
2025-05-04 00:56:39.250790 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-04 00:56:39.250799 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250808 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-04 00:56:39.250817 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250826 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-04 00:56:39.250834 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250843 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.250851 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250860 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.250869 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250878 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.250886 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250895 | orchestrator |
2025-05-04 00:56:39.250904 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-04 00:56:39.250913 | orchestrator | Sunday 04 May 2025 00:45:23 +0000 (0:00:01.149) 0:01:56.015 ************
2025-05-04 00:56:39.250922 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.250931 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.250939 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.250948 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.250957 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.250965 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.250974 | orchestrator |
2025-05-04 00:56:39.250983 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.250991 | orchestrator | Sunday 04 May 2025 00:45:24 +0000 (0:00:00.615) 0:01:56.630 ************
2025-05-04 00:56:39.251001 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251009 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251018 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251026 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251035 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251044 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251052 | orchestrator |
2025-05-04 00:56:39.251063 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-04 00:56:39.251075 | orchestrator | Sunday 04 May 2025 00:45:25 +0000 (0:00:00.844) 0:01:57.475 ************
2025-05-04 00:56:39.251089 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-04 00:56:39.251100 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251112 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-04 00:56:39.251123 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251134 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-04 00:56:39.251147 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.251158 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251170 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251183 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.251197 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251212 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.251225 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251234 | orchestrator |
2025-05-04 00:56:39.251242 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-04 00:56:39.251251 | orchestrator | Sunday 04 May 2025 00:45:26 +0000 (0:00:00.798) 0:01:58.274 ************
2025-05-04 00:56:39.251259 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251267 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251275 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251283 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.251292 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251313 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.251323 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251331 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.251340 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251350 | orchestrator |
2025-05-04 00:56:39.251359 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-04 00:56:39.251368 | orchestrator | Sunday 04 May 2025 00:45:27 +0000 (0:00:00.943) 0:01:59.217 ************
2025-05-04 00:56:39.251377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-04 00:56:39.251386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-04 00:56:39.251395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-04 00:56:39.251404 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-04 00:56:39.251422 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-04 00:56:39.251431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-04 00:56:39.251440 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251453 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-04 00:56:39.251517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-04 00:56:39.251529 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-04 00:56:39.251538 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.251556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.251565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.251574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-04 00:56:39.251583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-04 00:56:39.251592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-04 00:56:39.251601 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251611 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251620 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-04 00:56:39.251629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-04 00:56:39.251657 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-04 00:56:39.251666 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251674 | orchestrator |
2025-05-04 00:56:39.251682 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-04 00:56:39.251690 | orchestrator | Sunday 04 May 2025 00:45:28 +0000 (0:00:01.621) 0:02:00.839 ************
2025-05-04 00:56:39.251699 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251711 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251720 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251728 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251736 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251744 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251752 | orchestrator |
2025-05-04 00:56:39.251760 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-04 00:56:39.251781 | orchestrator | Sunday 04 May 2025 00:45:30 +0000 (0:00:01.374) 0:02:02.214 ************
2025-05-04 00:56:39.251789 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251797 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251805 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251814 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-04 00:56:39.251822 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251830 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-04 00:56:39.251843 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251852 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-04 00:56:39.251860 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251868 | orchestrator |
2025-05-04 00:56:39.251876 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-04 00:56:39.251884 | orchestrator | Sunday 04 May 2025 00:45:31 +0000 (0:00:01.375) 0:02:03.589 ************
2025-05-04 00:56:39.251892 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251900 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251908 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.251916 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.251924 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.251932 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.251940 | orchestrator |
2025-05-04 00:56:39.251948 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-04 00:56:39.251970 | orchestrator | Sunday 04 May 2025 00:45:32 +0000 (0:00:01.263) 0:02:04.852 ************
2025-05-04 00:56:39.251978 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.251986 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.251994 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.252002 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.252010 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.252018 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.252026 | orchestrator |
2025-05-04 00:56:39.252034 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
2025-05-04 00:56:39.252042 | orchestrator | Sunday 04 May 2025 00:45:34 +0000 (0:00:01.357) 0:02:06.210 ************
2025-05-04 00:56:39.252050 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:56:39.252058 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:56:39.252066 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:56:39.252075 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:56:39.252083 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:56:39.252091 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:56:39.252099 | orchestrator |
2025-05-04 00:56:39.252110 | orchestrator | TASK [ceph-container-common : enable ceph.target] ******************************
2025-05-04 00:56:39.252119 | orchestrator | Sunday 04 May 2025 00:45:35 +0000 (0:00:01.809) 0:02:08.020 ************
2025-05-04 00:56:39.252127 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:56:39.252135 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:56:39.252143 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:56:39.252151 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:56:39.252159 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:56:39.252167 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:56:39.252175 | orchestrator |
2025-05-04 00:56:39.252183 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
2025-05-04 00:56:39.252191 | orchestrator | Sunday 04 May 2025 00:45:37 +0000 (0:00:02.097) 0:02:10.117 ************
2025-05-04 00:56:39.252199 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.252208 | orchestrator |
2025-05-04 00:56:39.252216 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
2025-05-04 00:56:39.252225 | orchestrator | Sunday 04 May 2025 00:45:39 +0000 (0:00:01.288) 0:02:11.405 ************
2025-05-04 00:56:39.252233 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.252241 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.252249 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.252257 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.252265 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.252273 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.252281 | orchestrator |
2025-05-04 00:56:39.252339 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
2025-05-04 00:56:39.252357 | orchestrator | Sunday 04 May 2025 00:45:40 +0000 (0:00:00.973) 0:02:12.379 ************
2025-05-04 00:56:39.252365 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.252373 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.252381 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.252389 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.252398 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.252406 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.252414 | orchestrator |
2025-05-04 00:56:39.252422 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] **************************
2025-05-04 00:56:39.252430 | orchestrator | Sunday 04 May 2025 00:45:40 +0000 (0:00:00.675) 0:02:13.054 ************
2025-05-04 00:56:39.252438 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252446 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252454 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252462 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252470 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252478 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252487 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252495 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252503 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252511 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252519 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-04 00:56:39.252527 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-04 00:56:39.252535 | orchestrator |
2025-05-04 00:56:39.252543 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
2025-05-04 00:56:39.252551 | orchestrator | Sunday 04 May 2025 00:45:42 +0000 (0:00:01.714) 0:02:14.769 ************
2025-05-04 00:56:39.252559 | orchestrator | changed: [testbed-node-0]
2025-05-04 00:56:39.252567 | orchestrator | changed: [testbed-node-1]
2025-05-04 00:56:39.252575 | orchestrator | changed: [testbed-node-2]
2025-05-04 00:56:39.252587 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:56:39.252595 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:56:39.252603 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:56:39.252611 | orchestrator |
2025-05-04 00:56:39.252620 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************
2025-05-04 00:56:39.252628 | orchestrator | Sunday 04 May 2025 00:45:43 +0000 (0:00:01.320) 0:02:16.090 ************
2025-05-04 00:56:39.252636 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.252644 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.252652 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.252660 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.252668 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.252676 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.252684 | orchestrator |
2025-05-04 00:56:39.252692 | orchestrator | TASK [ceph-container-common : include registry.yml] ****************************
2025-05-04 00:56:39.252700 | orchestrator | Sunday 04 May 2025 00:45:44 +0000 (0:00:01.083) 0:02:17.173 ************
2025-05-04 00:56:39.252709 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.252717 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.252725 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.252733 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.252740 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.252748 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.252762 | orchestrator |
2025-05-04 00:56:39.252806 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] *************************
2025-05-04 00:56:39.252814 | orchestrator | Sunday 04 May 2025 00:45:45 +0000 (0:00:00.631) 0:02:17.804 ************
2025-05-04 00:56:39.252822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.252831 | orchestrator |
2025-05-04 00:56:39.252839 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
2025-05-04 00:56:39.252847 | orchestrator | Sunday 04 May 2025 00:45:46 +0000 (0:00:01.392) 0:02:19.197 ************
2025-05-04 00:56:39.252855 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.252864 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.252872 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.252880 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.252887 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.252894 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.252901 | orchestrator |
2025-05-04 00:56:39.252911 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
2025-05-04 00:56:39.252919 | orchestrator | Sunday 04 May 2025 00:46:33 +0000 (0:00:46.139) 0:03:05.336 ************
2025-05-04 00:56:39.252926 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.252933 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.252941 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.252948 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.252955 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.252962 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.253010 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.253021 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253029 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.253037 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.253045 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.253053 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253061 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.253069 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.253077 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.253084 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253092 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.253100 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.253108 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.253116 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253124 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-04 00:56:39.253132 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-04 00:56:39.253140 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-04 00:56:39.253148 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253156 | orchestrator |
2025-05-04 00:56:39.253164 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] ***********
2025-05-04 00:56:39.253172 | orchestrator | Sunday 04 May 2025 00:46:34 +0000 (0:00:01.084) 0:03:06.420 ************
2025-05-04 00:56:39.253180 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253192 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253200 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253208 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253216 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253223 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253231 | orchestrator |
2025-05-04 00:56:39.253239 | orchestrator | TASK [ceph-container-common : export local ceph dev image] *********************
2025-05-04 00:56:39.253247 | orchestrator | Sunday 04 May 2025 00:46:34 +0000 (0:00:00.762) 0:03:07.183 ************
2025-05-04 00:56:39.253255 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253262 | orchestrator |
2025-05-04 00:56:39.253270 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************
2025-05-04 00:56:39.253278 | orchestrator | Sunday 04 May 2025 00:46:35 +0000 (0:00:00.177) 0:03:07.360 ************
2025-05-04 00:56:39.253286 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253294 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253302 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253310 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253318 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253326 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253334 | orchestrator |
2025-05-04 00:56:39.253341 | orchestrator | TASK [ceph-container-common : load ceph dev image] *****************************
2025-05-04 00:56:39.253348 | orchestrator | Sunday 04 May 2025 00:46:36 +0000 (0:00:00.944) 0:03:08.305 ************
2025-05-04 00:56:39.253355 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253362 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253369 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253376 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253383 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253390 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253397 | orchestrator |
2025-05-04 00:56:39.253404 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ******************
2025-05-04 00:56:39.253411 | orchestrator | Sunday 04 May 2025 00:46:36 +0000 (0:00:00.829) 0:03:09.135 ************
2025-05-04 00:56:39.253418 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253425 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253432 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253439 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253446 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253467 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253475 | orchestrator |
2025-05-04 00:56:39.253482 | orchestrator | TASK [ceph-container-common : get ceph version] ********************************
2025-05-04 00:56:39.253492 | orchestrator | Sunday 04 May 2025 00:46:38 +0000 (0:00:01.082) 0:03:10.217 ************
2025-05-04 00:56:39.253499 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.253506 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.253513 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.253521 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.253528 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.253535 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.253541 | orchestrator |
2025-05-04 00:56:39.253549 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
2025-05-04 00:56:39.253556 | orchestrator | Sunday 04 May 2025 00:46:40 +0000 (0:00:02.135) 0:03:12.353 ************
2025-05-04 00:56:39.253563 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:56:39.253570 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:56:39.253577 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:56:39.253584 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.253591 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.253598 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.253605 | orchestrator |
2025-05-04 00:56:39.253612 | orchestrator | TASK [ceph-container-common : include release.yml] *****************************
2025-05-04 00:56:39.253619 | orchestrator | Sunday 04 May 2025 00:46:40 +0000 (0:00:00.670) 0:03:13.023 ************
2025-05-04 00:56:39.253634 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.253642 | orchestrator |
2025-05-04 00:56:39.253690 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] *********************
2025-05-04 00:56:39.253701 | orchestrator | Sunday 04 May 2025 00:46:41 +0000 (0:00:01.162) 0:03:14.186 ************
2025-05-04 00:56:39.253709 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:56:39.253717 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:56:39.253725 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:56:39.253732 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.253740 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.253748 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.253756 | orchestrator |
2025-05-04 00:56:39.253774 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ********************
2025-05-04 00:56:39.253781 |
orchestrator | Sunday 04 May 2025 00:46:42 +0000 (0:00:00.770) 0:03:14.956 ************ 2025-05-04 00:56:39.253788 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.253795 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.253802 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.253809 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.253816 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.253823 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.253830 | orchestrator | 2025-05-04 00:56:39.253837 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-04 00:56:39.253844 | orchestrator | Sunday 04 May 2025 00:46:43 +0000 (0:00:00.623) 0:03:15.580 ************ 2025-05-04 00:56:39.253851 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.253858 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.253865 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.253873 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.253879 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.253886 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.253893 | orchestrator | 2025-05-04 00:56:39.253900 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-04 00:56:39.253907 | orchestrator | Sunday 04 May 2025 00:46:44 +0000 (0:00:00.751) 0:03:16.332 ************ 2025-05-04 00:56:39.253914 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.253921 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.253928 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.253935 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.253942 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.253949 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.253956 | orchestrator | 2025-05-04 
00:56:39.253963 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-04 00:56:39.253970 | orchestrator | Sunday 04 May 2025 00:46:44 +0000 (0:00:00.573) 0:03:16.905 ************ 2025-05-04 00:56:39.253981 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.253988 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.253995 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.254002 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.254009 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.254032 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.254041 | orchestrator | 2025-05-04 00:56:39.254048 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-04 00:56:39.254055 | orchestrator | Sunday 04 May 2025 00:46:45 +0000 (0:00:01.021) 0:03:17.926 ************ 2025-05-04 00:56:39.254062 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.254069 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.254076 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.254083 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.254090 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.254097 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.254108 | orchestrator | 2025-05-04 00:56:39.254115 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-04 00:56:39.254123 | orchestrator | Sunday 04 May 2025 00:46:46 +0000 (0:00:00.845) 0:03:18.771 ************ 2025-05-04 00:56:39.254130 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.254137 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.254147 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.254154 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.254161 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:56:39.254168 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.254175 | orchestrator | 2025-05-04 00:56:39.254182 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-04 00:56:39.254189 | orchestrator | Sunday 04 May 2025 00:46:47 +0000 (0:00:01.034) 0:03:19.806 ************ 2025-05-04 00:56:39.254196 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.254203 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.254210 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.254217 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.254224 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.254231 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.254238 | orchestrator | 2025-05-04 00:56:39.254245 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-04 00:56:39.254253 | orchestrator | Sunday 04 May 2025 00:46:48 +0000 (0:00:01.352) 0:03:21.158 ************ 2025-05-04 00:56:39.254260 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.254267 | orchestrator | 2025-05-04 00:56:39.254275 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-04 00:56:39.254281 | orchestrator | Sunday 04 May 2025 00:46:50 +0000 (0:00:01.317) 0:03:22.475 ************ 2025-05-04 00:56:39.254289 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-04 00:56:39.254296 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-04 00:56:39.254303 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-04 00:56:39.254310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254317 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/ceph) 2025-05-04 00:56:39.254324 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254332 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-04 00:56:39.254340 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254388 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-04 00:56:39.254398 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254422 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254430 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254438 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254446 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-04 00:56:39.254454 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254462 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254469 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254477 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254485 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254493 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-04 00:56:39.254501 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254522 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254530 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254538 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254546 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-04 00:56:39.254554 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254562 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254578 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254586 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254594 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-04 00:56:39.254602 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254610 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254620 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254628 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254636 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254645 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-04 00:56:39.254652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254660 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254675 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254684 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254691 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-04 00:56:39.254698 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254719 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254726 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254733 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-04 00:56:39.254740 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254747 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254754 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254761 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254780 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-04 00:56:39.254794 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254801 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254808 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254825 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254832 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254839 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-mds) 2025-05-04 00:56:39.254868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.254877 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.254884 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.254932 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.254942 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254950 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.254957 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-04 00:56:39.254964 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.254971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.254978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.254985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.254992 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-04 00:56:39.254999 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-04 00:56:39.255006 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-04 00:56:39.255013 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-04 00:56:39.255020 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-04 00:56:39.255027 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.255035 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-04 00:56:39.255042 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-04 00:56:39.255049 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-04 00:56:39.255056 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-04 00:56:39.255062 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-04 00:56:39.255069 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-04 00:56:39.255076 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-04 00:56:39.255083 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-04 00:56:39.255090 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-04 00:56:39.255097 | orchestrator | 2025-05-04 00:56:39.255104 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-04 00:56:39.255114 | orchestrator | Sunday 04 May 2025 00:46:56 +0000 (0:00:06.077) 0:03:28.552 ************ 2025-05-04 00:56:39.255122 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255129 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255136 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255144 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.255151 | orchestrator | 2025-05-04 00:56:39.255158 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-04 00:56:39.255165 | orchestrator | Sunday 04 May 2025 00:46:57 +0000 (0:00:01.347) 0:03:29.900 ************ 2025-05-04 00:56:39.255172 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255179 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255187 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255194 | orchestrator | 2025-05-04 00:56:39.255201 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-04 00:56:39.255212 | orchestrator | Sunday 04 May 2025 00:46:58 +0000 (0:00:01.249) 0:03:31.150 ************ 2025-05-04 00:56:39.255219 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255226 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255234 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.255241 | orchestrator | 2025-05-04 00:56:39.255248 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-04 00:56:39.255255 | orchestrator | Sunday 04 May 2025 00:47:00 +0000 (0:00:01.304) 0:03:32.454 ************ 2025-05-04 00:56:39.255262 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255269 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255276 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255283 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.255290 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.255297 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.255304 | orchestrator | 2025-05-04 00:56:39.255311 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-04 00:56:39.255318 | orchestrator | Sunday 04 May 2025 00:47:01 +0000 (0:00:00.932) 0:03:33.386 ************ 
2025-05-04 00:56:39.255325 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255333 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255340 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255347 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.255354 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.255361 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.255368 | orchestrator | 2025-05-04 00:56:39.255375 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-04 00:56:39.255382 | orchestrator | Sunday 04 May 2025 00:47:01 +0000 (0:00:00.589) 0:03:33.976 ************ 2025-05-04 00:56:39.255389 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255431 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255441 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255449 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255456 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.255463 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255471 | orchestrator | 2025-05-04 00:56:39.255478 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-04 00:56:39.255485 | orchestrator | Sunday 04 May 2025 00:47:02 +0000 (0:00:00.727) 0:03:34.703 ************ 2025-05-04 00:56:39.255492 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255499 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255507 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255514 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255521 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.255528 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255536 | orchestrator | 2025-05-04 00:56:39.255543 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-05-04 00:56:39.255550 | orchestrator | Sunday 04 May 2025 00:47:03 +0000 (0:00:00.504) 0:03:35.208 ************ 2025-05-04 00:56:39.255557 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255564 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255571 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255578 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255586 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.255593 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255600 | orchestrator | 2025-05-04 00:56:39.255607 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-04 00:56:39.255614 | orchestrator | Sunday 04 May 2025 00:47:03 +0000 (0:00:00.708) 0:03:35.916 ************ 2025-05-04 00:56:39.255626 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255633 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255640 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255647 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255654 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.255661 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255669 | orchestrator | 2025-05-04 00:56:39.255676 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-04 00:56:39.255683 | orchestrator | Sunday 04 May 2025 00:47:04 +0000 (0:00:00.553) 0:03:36.469 ************ 2025-05-04 00:56:39.255691 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255698 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255705 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255716 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255724 | orchestrator | skipping: [testbed-node-4] 2025-05-04 
00:56:39.255732 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255739 | orchestrator | 2025-05-04 00:56:39.255746 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-04 00:56:39.255762 | orchestrator | Sunday 04 May 2025 00:47:04 +0000 (0:00:00.681) 0:03:37.151 ************ 2025-05-04 00:56:39.255780 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255788 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255796 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255804 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.255812 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.255820 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.255828 | orchestrator | 2025-05-04 00:56:39.255836 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-04 00:56:39.255844 | orchestrator | Sunday 04 May 2025 00:47:05 +0000 (0:00:00.544) 0:03:37.695 ************ 2025-05-04 00:56:39.255852 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255860 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255867 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.255875 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.255883 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.255891 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.255898 | orchestrator | 2025-05-04 00:56:39.255906 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-04 00:56:39.255914 | orchestrator | Sunday 04 May 2025 00:47:07 +0000 (0:00:02.181) 0:03:39.876 ************ 2025-05-04 00:56:39.255922 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.255930 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.255937 | orchestrator | skipping: 
[testbed-node-2] 2025-05-04 00:56:39.255945 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.255953 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.255961 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.255969 | orchestrator | 2025-05-04 00:56:39.255976 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-04 00:56:39.255984 | orchestrator | Sunday 04 May 2025 00:47:08 +0000 (0:00:00.639) 0:03:40.516 ************ 2025-05-04 00:56:39.255992 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.256000 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.256008 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256016 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.256027 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.256035 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256043 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.256051 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.256059 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256067 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.256075 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.256087 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.256095 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.256103 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.256111 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.256119 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.256127 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.256134 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.256142 | orchestrator | 2025-05-04 00:56:39.256150 | 
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-04 00:56:39.256198 | orchestrator | Sunday 04 May 2025 00:47:09 +0000 (0:00:01.083) 0:03:41.599 ************ 2025-05-04 00:56:39.256208 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-04 00:56:39.256220 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-04 00:56:39.256228 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256236 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-04 00:56:39.256243 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-04 00:56:39.256251 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256259 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-04 00:56:39.256267 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-04 00:56:39.256275 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-04 00:56:39.256283 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-04 00:56:39.256290 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256298 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-04 00:56:39.256306 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-04 00:56:39.256314 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-04 00:56:39.256322 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-04 00:56:39.256329 | orchestrator | 2025-05-04 00:56:39.256337 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-04 00:56:39.256345 | orchestrator | Sunday 04 May 2025 00:47:10 +0000 (0:00:00.778) 0:03:42.378 ************ 2025-05-04 00:56:39.256353 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256360 | orchestrator | skipping: 
[testbed-node-1] 2025-05-04 00:56:39.256368 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256376 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.256384 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.256392 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.256399 | orchestrator | 2025-05-04 00:56:39.256407 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-04 00:56:39.256415 | orchestrator | Sunday 04 May 2025 00:47:11 +0000 (0:00:01.050) 0:03:43.429 ************ 2025-05-04 00:56:39.256423 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256431 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256439 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256446 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.256454 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.256462 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.256469 | orchestrator | 2025-05-04 00:56:39.256477 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:56:39.256485 | orchestrator | Sunday 04 May 2025 00:47:11 +0000 (0:00:00.615) 0:03:44.044 ************ 2025-05-04 00:56:39.256493 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256501 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256509 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256516 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.256524 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.256532 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.256544 | orchestrator | 2025-05-04 00:56:39.256552 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:56:39.256560 | orchestrator | Sunday 04 May 2025 
00:47:12 +0000 (0:00:00.916) 0:03:44.961 ************ 2025-05-04 00:56:39.256568 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256576 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256588 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256596 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.256604 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.256611 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.256619 | orchestrator | 2025-05-04 00:56:39.256630 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:56:39.256637 | orchestrator | Sunday 04 May 2025 00:47:13 +0000 (0:00:00.671) 0:03:45.633 ************ 2025-05-04 00:56:39.256645 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256653 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256661 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256669 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.256676 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.256684 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.256692 | orchestrator | 2025-05-04 00:56:39.256700 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:56:39.256707 | orchestrator | Sunday 04 May 2025 00:47:14 +0000 (0:00:01.013) 0:03:46.646 ************ 2025-05-04 00:56:39.256715 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256723 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.256731 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.256739 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.256746 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.256754 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.256762 | orchestrator | 2025-05-04 00:56:39.256801 | orchestrator | TASK [ceph-facts : 
set_fact _interface] **************************************** 2025-05-04 00:56:39.256809 | orchestrator | Sunday 04 May 2025 00:47:15 +0000 (0:00:00.742) 0:03:47.389 ************ 2025-05-04 00:56:39.256816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.256823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.256830 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.256837 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256845 | orchestrator | 2025-05-04 00:56:39.256852 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:56:39.256859 | orchestrator | Sunday 04 May 2025 00:47:16 +0000 (0:00:00.964) 0:03:48.353 ************ 2025-05-04 00:56:39.256866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.256873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.256880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.256887 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.256894 | orchestrator | 2025-05-04 00:56:39.257420 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:56:39.257457 | orchestrator | Sunday 04 May 2025 00:47:16 +0000 (0:00:00.408) 0:03:48.761 ************ 2025-05-04 00:56:39.257464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.257471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.257477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.257483 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257490 | orchestrator | 2025-05-04 00:56:39.257496 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-05-04 00:56:39.257503 | orchestrator | Sunday 04 May 2025 00:47:16 +0000 (0:00:00.421) 0:03:49.183 ************ 2025-05-04 00:56:39.257509 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257515 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257531 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257537 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.257544 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.257550 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.257556 | orchestrator | 2025-05-04 00:56:39.257562 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:56:39.257569 | orchestrator | Sunday 04 May 2025 00:47:17 +0000 (0:00:00.703) 0:03:49.886 ************ 2025-05-04 00:56:39.257575 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.257582 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.257590 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257597 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257604 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.257611 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257618 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-04 00:56:39.257625 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-04 00:56:39.257632 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-04 00:56:39.257639 | orchestrator | 2025-05-04 00:56:39.257646 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-04 00:56:39.257653 | orchestrator | Sunday 04 May 2025 00:47:18 +0000 (0:00:01.317) 0:03:51.204 ************ 2025-05-04 00:56:39.257660 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257667 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257674 | 
orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257681 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.257688 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.257695 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.257702 | orchestrator | 2025-05-04 00:56:39.257709 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.257716 | orchestrator | Sunday 04 May 2025 00:47:19 +0000 (0:00:00.605) 0:03:51.809 ************ 2025-05-04 00:56:39.257722 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257729 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257736 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257743 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.257751 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.257758 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.257779 | orchestrator | 2025-05-04 00:56:39.257791 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:56:39.257798 | orchestrator | Sunday 04 May 2025 00:47:20 +0000 (0:00:00.765) 0:03:52.574 ************ 2025-05-04 00:56:39.257805 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.257812 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257819 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.257826 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257833 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.257840 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257847 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:56:39.257854 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.257876 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:56:39.257883 | orchestrator 
| skipping: [testbed-node-4] 2025-05-04 00:56:39.257890 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:56:39.257897 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.257904 | orchestrator | 2025-05-04 00:56:39.257911 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:56:39.257918 | orchestrator | Sunday 04 May 2025 00:47:21 +0000 (0:00:00.736) 0:03:53.311 ************ 2025-05-04 00:56:39.257925 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.257932 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.257944 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.257956 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.257964 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.257970 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.257977 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.257983 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.257989 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.257996 | orchestrator | 2025-05-04 00:56:39.258002 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-04 00:56:39.258008 | orchestrator | Sunday 04 May 2025 00:47:21 +0000 (0:00:00.880) 0:03:54.191 ************ 2025-05-04 00:56:39.258031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.258039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.258045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  
2025-05-04 00:56:39.258052 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258058 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-04 00:56:39.258120 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-04 00:56:39.258130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-04 00:56:39.258136 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.258143 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-04 00:56:39.258149 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-04 00:56:39.258155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-04 00:56:39.258161 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.258168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.258174 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-04 00:56:39.258181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.258187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-04 00:56:39.258193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-04 00:56:39.258200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.258206 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-04 00:56:39.258212 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258218 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.258224 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-04 00:56:39.258231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-04 00:56:39.258237 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.258243 | orchestrator | 2025-05-04 00:56:39.258249 | orchestrator | TASK [ceph-config : generate 
ceph.conf configuration file] ********************* 2025-05-04 00:56:39.258256 | orchestrator | Sunday 04 May 2025 00:47:23 +0000 (0:00:01.571) 0:03:55.762 ************ 2025-05-04 00:56:39.258262 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.258269 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.258275 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.258281 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.258287 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.258294 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.258300 | orchestrator | 2025-05-04 00:56:39.258306 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.258313 | orchestrator | Sunday 04 May 2025 00:47:27 +0000 (0:00:03.989) 0:03:59.752 ************ 2025-05-04 00:56:39.258319 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.258325 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.258331 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.258342 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.258348 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.258354 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.258361 | orchestrator | 2025-05-04 00:56:39.258367 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-04 00:56:39.258373 | orchestrator | Sunday 04 May 2025 00:47:28 +0000 (0:00:01.031) 0:04:00.784 ************ 2025-05-04 00:56:39.258379 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258385 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.258392 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.258398 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.258404 | orchestrator | 
2025-05-04 00:56:39.258411 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-04 00:56:39.258417 | orchestrator | Sunday 04 May 2025 00:47:29 +0000 (0:00:01.063) 0:04:01.848 ************ 2025-05-04 00:56:39.258424 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.258430 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.258436 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.258442 | orchestrator | 2025-05-04 00:56:39.258453 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-04 00:56:39.258460 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.258466 | orchestrator | 2025-05-04 00:56:39.258472 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-04 00:56:39.258479 | orchestrator | Sunday 04 May 2025 00:47:30 +0000 (0:00:01.012) 0:04:02.861 ************ 2025-05-04 00:56:39.258485 | orchestrator | 2025-05-04 00:56:39.258491 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-04 00:56:39.258498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.258504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.258510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.258516 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258523 | orchestrator | 2025-05-04 00:56:39.258529 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-04 00:56:39.258535 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.258541 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.258548 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.258554 | orchestrator 
| 2025-05-04 00:56:39.258560 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-04 00:56:39.258567 | orchestrator | Sunday 04 May 2025 00:47:31 +0000 (0:00:01.140) 0:04:04.001 ************ 2025-05-04 00:56:39.258573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.258582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.258589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.258595 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258601 | orchestrator | 2025-05-04 00:56:39.258607 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-04 00:56:39.258614 | orchestrator | Sunday 04 May 2025 00:47:32 +0000 (0:00:00.750) 0:04:04.752 ************ 2025-05-04 00:56:39.258620 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.258626 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.258633 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.258639 | orchestrator | 2025-05-04 00:56:39.258645 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-04 00:56:39.258683 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258692 | orchestrator | 2025-05-04 00:56:39.258699 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-04 00:56:39.258706 | orchestrator | Sunday 04 May 2025 00:47:33 +0000 (0:00:00.686) 0:04:05.438 ************ 2025-05-04 00:56:39.258716 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258723 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.258729 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.258735 | orchestrator | 2025-05-04 00:56:39.258742 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 
2025-05-04 00:56:39.258748 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258754 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.258761 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.258807 | orchestrator | 2025-05-04 00:56:39.258814 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-04 00:56:39.258820 | orchestrator | Sunday 04 May 2025 00:47:33 +0000 (0:00:00.543) 0:04:05.981 ************ 2025-05-04 00:56:39.258827 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258833 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.258839 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.258846 | orchestrator | 2025-05-04 00:56:39.258852 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-04 00:56:39.258858 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258864 | orchestrator | 2025-05-04 00:56:39.258871 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-04 00:56:39.258877 | orchestrator | Sunday 04 May 2025 00:47:34 +0000 (0:00:00.686) 0:04:06.668 ************ 2025-05-04 00:56:39.258883 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258890 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.258896 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.258902 | orchestrator | 2025-05-04 00:56:39.258908 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-04 00:56:39.258915 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258924 | orchestrator | 2025-05-04 00:56:39.258931 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-04 00:56:39.258937 | orchestrator | Sunday 04 May 2025 00:47:35 +0000 (0:00:00.711) 0:04:07.379 ************ 2025-05-04 
00:56:39.258944 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.258950 | orchestrator | 2025-05-04 00:56:39.258956 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-04 00:56:39.258963 | orchestrator | Sunday 04 May 2025 00:47:35 +0000 (0:00:00.129) 0:04:07.508 ************ 2025-05-04 00:56:39.258969 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.258975 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.258981 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.258987 | orchestrator | 2025-05-04 00:56:39.258994 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-04 00:56:39.259000 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259006 | orchestrator | 2025-05-04 00:56:39.259013 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-04 00:56:39.259019 | orchestrator | Sunday 04 May 2025 00:47:36 +0000 (0:00:00.826) 0:04:08.335 ************ 2025-05-04 00:56:39.259025 | orchestrator | 2025-05-04 00:56:39.259032 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-04 00:56:39.259038 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.259051 | orchestrator | 2025-05-04 00:56:39.259057 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-04 00:56:39.259063 | orchestrator | Sunday 04 May 2025 00:47:36 +0000 (0:00:00.796) 0:04:09.131 ************ 2025-05-04 00:56:39.259070 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.259076 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.259082 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.259088 | 
orchestrator | 2025-05-04 00:56:39.259095 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-04 00:56:39.259112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.259123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.259129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.259136 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259142 | orchestrator | 2025-05-04 00:56:39.259149 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-04 00:56:39.259157 | orchestrator | Sunday 04 May 2025 00:47:38 +0000 (0:00:01.164) 0:04:10.295 ************ 2025-05-04 00:56:39.259164 | orchestrator | 2025-05-04 00:56:39.259170 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-04 00:56:39.259176 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259183 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.259189 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.259195 | orchestrator | 2025-05-04 00:56:39.259201 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-04 00:56:39.259208 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.259214 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.259220 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.259227 | orchestrator | 2025-05-04 00:56:39.259233 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-04 00:56:39.259239 | orchestrator | Sunday 04 May 2025 00:47:39 +0000 (0:00:01.382) 0:04:11.678 ************ 2025-05-04 00:56:39.259245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.259254 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.259264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.259271 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.259277 | orchestrator | 2025-05-04 00:56:39.259284 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-04 00:56:39.259290 | orchestrator | Sunday 04 May 2025 00:47:40 +0000 (0:00:00.962) 0:04:12.640 ************ 2025-05-04 00:56:39.259296 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.259302 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.259309 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.259315 | orchestrator | 2025-05-04 00:56:39.259363 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-04 00:56:39.259372 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259379 | orchestrator | 2025-05-04 00:56:39.259386 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-04 00:56:39.259393 | orchestrator | Sunday 04 May 2025 00:47:41 +0000 (0:00:01.075) 0:04:13.716 ************ 2025-05-04 00:56:39.259400 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.259407 | orchestrator | 2025-05-04 00:56:39.259414 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-04 00:56:39.259421 | orchestrator | Sunday 04 May 2025 00:47:42 +0000 (0:00:00.622) 0:04:14.339 ************ 2025-05-04 00:56:39.259427 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.259434 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.259441 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.259448 | orchestrator | 2025-05-04 00:56:39.259454 | orchestrator | TASK [ceph-handler : rbd-target-api 
and rbd-target-gw handler] ***************** 2025-05-04 00:56:39.259461 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.259468 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.259475 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.259482 | orchestrator | 2025-05-04 00:56:39.259489 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-04 00:56:39.259495 | orchestrator | Sunday 04 May 2025 00:47:43 +0000 (0:00:01.255) 0:04:15.594 ************ 2025-05-04 00:56:39.259502 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.259509 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.259516 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.259522 | orchestrator | 2025-05-04 00:56:39.259533 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.259540 | orchestrator | Sunday 04 May 2025 00:47:44 +0000 (0:00:01.249) 0:04:16.844 ************ 2025-05-04 00:56:39.259547 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.259553 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.259560 | orchestrator | 2025-05-04 00:56:39.259567 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-04 00:56:39.259574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.259580 | orchestrator | 2025-05-04 00:56:39.259587 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.259594 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.259600 | orchestrator | 2025-05-04 00:56:39.259607 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-04 00:56:39.259614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.259620 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.259627 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259634 | orchestrator | 2025-05-04 00:56:39.259641 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-04 00:56:39.259647 | orchestrator | Sunday 04 May 2025 00:47:46 +0000 (0:00:01.544) 0:04:18.389 ************ 2025-05-04 00:56:39.259654 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.259661 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.259668 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.259675 | orchestrator | 2025-05-04 00:56:39.259682 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-04 00:56:39.259688 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.975) 0:04:19.364 ************ 2025-05-04 00:56:39.259698 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.259704 | orchestrator | 2025-05-04 00:56:39.259711 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-04 00:56:39.259717 | orchestrator | Sunday 04 May 2025 00:47:47 +0000 (0:00:00.565) 0:04:19.930 ************ 2025-05-04 00:56:39.259723 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.259729 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.259735 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.259741 | orchestrator | 2025-05-04 00:56:39.259747 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-04 00:56:39.259753 | orchestrator | Sunday 04 May 2025 00:47:48 +0000 (0:00:00.558) 0:04:20.489 ************ 2025-05-04 00:56:39.259759 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.259778 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.259785 | orchestrator | changed: 
[testbed-node-5] 2025-05-04 00:56:39.259791 | orchestrator | 2025-05-04 00:56:39.259797 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-04 00:56:39.259803 | orchestrator | Sunday 04 May 2025 00:47:49 +0000 (0:00:01.294) 0:04:21.783 ************ 2025-05-04 00:56:39.259809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.259815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.259820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.259826 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259832 | orchestrator | 2025-05-04 00:56:39.259838 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-04 00:56:39.259844 | orchestrator | Sunday 04 May 2025 00:47:50 +0000 (0:00:00.665) 0:04:22.449 ************ 2025-05-04 00:56:39.259850 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.259856 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.259862 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.259868 | orchestrator | 2025-05-04 00:56:39.259877 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-04 00:56:39.259883 | orchestrator | Sunday 04 May 2025 00:47:50 +0000 (0:00:00.404) 0:04:22.853 ************ 2025-05-04 00:56:39.259895 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.259901 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.259907 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.259913 | orchestrator | 2025-05-04 00:56:39.259919 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-04 00:56:39.259925 | orchestrator | Sunday 04 May 2025 00:47:51 +0000 (0:00:00.463) 0:04:23.316 ************ 2025-05-04 00:56:39.259931 | orchestrator | skipping: 
[testbed-node-3] 2025-05-04 00:56:39.259938 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.259982 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.259991 | orchestrator | 2025-05-04 00:56:39.259997 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-04 00:56:39.260003 | orchestrator | Sunday 04 May 2025 00:47:51 +0000 (0:00:00.837) 0:04:24.154 ************ 2025-05-04 00:56:39.260009 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.260016 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.260022 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.260028 | orchestrator | 2025-05-04 00:56:39.260034 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.260040 | orchestrator | Sunday 04 May 2025 00:47:52 +0000 (0:00:00.469) 0:04:24.624 ************ 2025-05-04 00:56:39.260046 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.260052 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.260058 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.260064 | orchestrator | 2025-05-04 00:56:39.260070 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-04 00:56:39.260076 | orchestrator | 2025-05-04 00:56:39.260082 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.260088 | orchestrator | Sunday 04 May 2025 00:47:54 +0000 (0:00:02.156) 0:04:26.781 ************ 2025-05-04 00:56:39.260094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.260100 | orchestrator | 2025-05-04 00:56:39.260106 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.260112 | orchestrator | Sunday 04 
May 2025 00:47:55 +0000 (0:00:00.653) 0:04:27.434 ************ 2025-05-04 00:56:39.260118 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260124 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260130 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260136 | orchestrator | 2025-05-04 00:56:39.260142 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.260148 | orchestrator | Sunday 04 May 2025 00:47:55 +0000 (0:00:00.769) 0:04:28.204 ************ 2025-05-04 00:56:39.260154 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260160 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260166 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260172 | orchestrator | 2025-05-04 00:56:39.260177 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.260184 | orchestrator | Sunday 04 May 2025 00:47:56 +0000 (0:00:00.487) 0:04:28.691 ************ 2025-05-04 00:56:39.260189 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260195 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260202 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260207 | orchestrator | 2025-05-04 00:56:39.260213 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-04 00:56:39.260219 | orchestrator | Sunday 04 May 2025 00:47:56 +0000 (0:00:00.295) 0:04:28.987 ************ 2025-05-04 00:56:39.260226 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260232 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260237 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260243 | orchestrator | 2025-05-04 00:56:39.260249 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.260259 | orchestrator | Sunday 04 May 2025 00:47:57 
+0000 (0:00:00.304) 0:04:29.292 ************ 2025-05-04 00:56:39.260265 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260271 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260277 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260283 | orchestrator | 2025-05-04 00:56:39.260289 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-04 00:56:39.260295 | orchestrator | Sunday 04 May 2025 00:47:57 +0000 (0:00:00.708) 0:04:30.000 ************ 2025-05-04 00:56:39.260301 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260307 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260313 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260319 | orchestrator | 2025-05-04 00:56:39.260325 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-04 00:56:39.260331 | orchestrator | Sunday 04 May 2025 00:47:58 +0000 (0:00:00.514) 0:04:30.515 ************ 2025-05-04 00:56:39.260337 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260343 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260348 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260354 | orchestrator | 2025-05-04 00:56:39.260360 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-04 00:56:39.260366 | orchestrator | Sunday 04 May 2025 00:47:58 +0000 (0:00:00.322) 0:04:30.838 ************ 2025-05-04 00:56:39.260372 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260378 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260384 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260390 | orchestrator | 2025-05-04 00:56:39.260396 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-04 00:56:39.260402 | orchestrator | Sunday 04 May 2025 00:47:58 +0000 (0:00:00.294) 
0:04:31.132 ************ 2025-05-04 00:56:39.260408 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260414 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260420 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260425 | orchestrator | 2025-05-04 00:56:39.260432 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-04 00:56:39.260446 | orchestrator | Sunday 04 May 2025 00:47:59 +0000 (0:00:00.340) 0:04:31.473 ************ 2025-05-04 00:56:39.260453 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260459 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260465 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260471 | orchestrator | 2025-05-04 00:56:39.260477 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-04 00:56:39.260486 | orchestrator | Sunday 04 May 2025 00:47:59 +0000 (0:00:00.582) 0:04:32.055 ************ 2025-05-04 00:56:39.260492 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260498 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260504 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260510 | orchestrator | 2025-05-04 00:56:39.260516 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-04 00:56:39.260555 | orchestrator | Sunday 04 May 2025 00:48:00 +0000 (0:00:00.799) 0:04:32.855 ************ 2025-05-04 00:56:39.260563 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260569 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260575 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260581 | orchestrator | 2025-05-04 00:56:39.260587 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-04 00:56:39.260593 | orchestrator | Sunday 04 May 2025 00:48:00 +0000 (0:00:00.337) 0:04:33.193 
************ 2025-05-04 00:56:39.260599 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260605 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260611 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260617 | orchestrator | 2025-05-04 00:56:39.260623 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-04 00:56:39.260629 | orchestrator | Sunday 04 May 2025 00:48:01 +0000 (0:00:00.363) 0:04:33.556 ************ 2025-05-04 00:56:39.260638 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260645 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260651 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260657 | orchestrator | 2025-05-04 00:56:39.260663 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-04 00:56:39.260669 | orchestrator | Sunday 04 May 2025 00:48:01 +0000 (0:00:00.618) 0:04:34.174 ************ 2025-05-04 00:56:39.260675 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260684 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260690 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260696 | orchestrator | 2025-05-04 00:56:39.260702 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-04 00:56:39.260708 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.339) 0:04:34.514 ************ 2025-05-04 00:56:39.260715 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260720 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260726 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260732 | orchestrator | 2025-05-04 00:56:39.260738 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-04 00:56:39.260744 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.330) 0:04:34.844 ************ 
2025-05-04 00:56:39.260750 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260756 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260762 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260782 | orchestrator | 2025-05-04 00:56:39.260788 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-04 00:56:39.260794 | orchestrator | Sunday 04 May 2025 00:48:02 +0000 (0:00:00.322) 0:04:35.166 ************ 2025-05-04 00:56:39.260800 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260807 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260812 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260818 | orchestrator | 2025-05-04 00:56:39.260824 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-04 00:56:39.260830 | orchestrator | Sunday 04 May 2025 00:48:03 +0000 (0:00:00.577) 0:04:35.743 ************ 2025-05-04 00:56:39.260836 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260842 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260848 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260854 | orchestrator | 2025-05-04 00:56:39.260860 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-04 00:56:39.260866 | orchestrator | Sunday 04 May 2025 00:48:03 +0000 (0:00:00.372) 0:04:36.116 ************ 2025-05-04 00:56:39.260872 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.260878 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.260884 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.260890 | orchestrator | 2025-05-04 00:56:39.260896 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-04 00:56:39.260902 | orchestrator | Sunday 04 May 2025 00:48:04 +0000 (0:00:00.444) 0:04:36.561 ************ 2025-05-04 00:56:39.260908 | 
orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260914 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260920 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260926 | orchestrator | 2025-05-04 00:56:39.260932 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-04 00:56:39.260938 | orchestrator | Sunday 04 May 2025 00:48:04 +0000 (0:00:00.354) 0:04:36.915 ************ 2025-05-04 00:56:39.260944 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260950 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260956 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.260962 | orchestrator | 2025-05-04 00:56:39.260968 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-04 00:56:39.260974 | orchestrator | Sunday 04 May 2025 00:48:05 +0000 (0:00:00.680) 0:04:37.595 ************ 2025-05-04 00:56:39.260980 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.260990 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.260996 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261002 | orchestrator | 2025-05-04 00:56:39.261008 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-04 00:56:39.261014 | orchestrator | Sunday 04 May 2025 00:48:05 +0000 (0:00:00.372) 0:04:37.968 ************ 2025-05-04 00:56:39.261020 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261027 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261033 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261039 | orchestrator | 2025-05-04 00:56:39.261045 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-04 00:56:39.261051 | orchestrator | Sunday 04 May 2025 00:48:06 +0000 (0:00:00.383) 0:04:38.352 ************ 2025-05-04 00:56:39.261057 | 
orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261063 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261069 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261075 | orchestrator | 2025-05-04 00:56:39.261081 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-04 00:56:39.261087 | orchestrator | Sunday 04 May 2025 00:48:06 +0000 (0:00:00.365) 0:04:38.717 ************ 2025-05-04 00:56:39.261093 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261099 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261105 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261111 | orchestrator | 2025-05-04 00:56:39.261117 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-04 00:56:39.261158 | orchestrator | Sunday 04 May 2025 00:48:07 +0000 (0:00:00.677) 0:04:39.394 ************ 2025-05-04 00:56:39.261166 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261173 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261179 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261185 | orchestrator | 2025-05-04 00:56:39.261191 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-04 00:56:39.261197 | orchestrator | Sunday 04 May 2025 00:48:07 +0000 (0:00:00.346) 0:04:39.741 ************ 2025-05-04 00:56:39.261204 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261210 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261216 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261221 | orchestrator | 2025-05-04 00:56:39.261227 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-04 00:56:39.261234 | orchestrator | Sunday 04 May 2025 00:48:07 +0000 (0:00:00.304) 
0:04:40.045 ************ 2025-05-04 00:56:39.261240 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261246 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261252 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261258 | orchestrator | 2025-05-04 00:56:39.261264 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-04 00:56:39.261270 | orchestrator | Sunday 04 May 2025 00:48:08 +0000 (0:00:00.300) 0:04:40.346 ************ 2025-05-04 00:56:39.261276 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261282 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261288 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261294 | orchestrator | 2025-05-04 00:56:39.261300 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-04 00:56:39.261306 | orchestrator | Sunday 04 May 2025 00:48:08 +0000 (0:00:00.463) 0:04:40.809 ************ 2025-05-04 00:56:39.261312 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261318 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261324 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261330 | orchestrator | 2025-05-04 00:56:39.261336 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-04 00:56:39.261342 | orchestrator | Sunday 04 May 2025 00:48:08 +0000 (0:00:00.338) 0:04:41.148 ************ 2025-05-04 00:56:39.261352 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261361 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261367 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261373 | orchestrator | 2025-05-04 00:56:39.261379 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-04 00:56:39.261385 | orchestrator | 
Sunday 04 May 2025 00:48:09 +0000 (0:00:00.325) 0:04:41.473 ************ 2025-05-04 00:56:39.261392 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.261398 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.261404 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261410 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.261416 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.261422 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261427 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.261434 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.261439 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261446 | orchestrator | 2025-05-04 00:56:39.261452 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-04 00:56:39.261458 | orchestrator | Sunday 04 May 2025 00:48:09 +0000 (0:00:00.319) 0:04:41.793 ************ 2025-05-04 00:56:39.261464 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-04 00:56:39.261470 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-04 00:56:39.261476 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261482 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-04 00:56:39.261488 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-04 00:56:39.261494 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261500 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-04 00:56:39.261506 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-04 00:56:39.261511 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261518 | orchestrator | 2025-05-04 00:56:39.261523 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-04 00:56:39.261529 | orchestrator | Sunday 04 May 2025 00:48:10 +0000 (0:00:00.534) 0:04:42.327 ************ 2025-05-04 00:56:39.261535 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261541 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261547 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261557 | orchestrator | 2025-05-04 00:56:39.261563 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-04 00:56:39.261569 | orchestrator | Sunday 04 May 2025 00:48:10 +0000 (0:00:00.332) 0:04:42.659 ************ 2025-05-04 00:56:39.261575 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261581 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261587 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261601 | orchestrator | 2025-05-04 00:56:39.261608 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:56:39.261614 | orchestrator | Sunday 04 May 2025 00:48:10 +0000 (0:00:00.486) 0:04:43.146 ************ 2025-05-04 00:56:39.261620 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261626 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261632 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261638 | orchestrator | 2025-05-04 00:56:39.261644 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:56:39.261650 | orchestrator | Sunday 04 May 2025 00:48:11 +0000 (0:00:00.488) 0:04:43.634 ************ 2025-05-04 00:56:39.261656 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261662 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261668 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261674 | orchestrator | 2025-05-04 
00:56:39.261713 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:56:39.261724 | orchestrator | Sunday 04 May 2025 00:48:11 +0000 (0:00:00.298) 0:04:43.933 ************ 2025-05-04 00:56:39.261731 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261737 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261743 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261749 | orchestrator | 2025-05-04 00:56:39.261755 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:56:39.261761 | orchestrator | Sunday 04 May 2025 00:48:12 +0000 (0:00:00.334) 0:04:44.268 ************ 2025-05-04 00:56:39.261801 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261812 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261818 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261824 | orchestrator | 2025-05-04 00:56:39.261830 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-04 00:56:39.261836 | orchestrator | Sunday 04 May 2025 00:48:12 +0000 (0:00:00.306) 0:04:44.574 ************ 2025-05-04 00:56:39.261842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.261848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.261854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.261860 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261866 | orchestrator | 2025-05-04 00:56:39.261872 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:56:39.261878 | orchestrator | Sunday 04 May 2025 00:48:13 +0000 (0:00:00.754) 0:04:45.328 ************ 2025-05-04 00:56:39.261884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 
00:56:39.261890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.261896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.261902 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261908 | orchestrator | 2025-05-04 00:56:39.261914 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:56:39.261920 | orchestrator | Sunday 04 May 2025 00:48:14 +0000 (0:00:01.169) 0:04:46.498 ************ 2025-05-04 00:56:39.261926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.261931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.261937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.261943 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261949 | orchestrator | 2025-05-04 00:56:39.261955 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.261961 | orchestrator | Sunday 04 May 2025 00:48:14 +0000 (0:00:00.504) 0:04:47.003 ************ 2025-05-04 00:56:39.261967 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.261973 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.261979 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.261985 | orchestrator | 2025-05-04 00:56:39.261990 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:56:39.261999 | orchestrator | Sunday 04 May 2025 00:48:15 +0000 (0:00:00.285) 0:04:47.288 ************ 2025-05-04 00:56:39.262005 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.262011 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262033 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.262039 | orchestrator | skipping: [testbed-node-0] 
2025-05-04 00:56:39.262045 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.262051 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262057 | orchestrator | 2025-05-04 00:56:39.262063 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-04 00:56:39.262069 | orchestrator | Sunday 04 May 2025 00:48:15 +0000 (0:00:00.425) 0:04:47.713 ************ 2025-05-04 00:56:39.262074 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262085 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262091 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262096 | orchestrator | 2025-05-04 00:56:39.262102 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.262108 | orchestrator | Sunday 04 May 2025 00:48:15 +0000 (0:00:00.311) 0:04:48.025 ************ 2025-05-04 00:56:39.262114 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262120 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262126 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262132 | orchestrator | 2025-05-04 00:56:39.262138 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:56:39.262143 | orchestrator | Sunday 04 May 2025 00:48:16 +0000 (0:00:00.514) 0:04:48.540 ************ 2025-05-04 00:56:39.262149 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.262155 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262161 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.262166 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262171 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.262177 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262182 | orchestrator | 2025-05-04 00:56:39.262187 | orchestrator | TASK 
[ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:56:39.262193 | orchestrator | Sunday 04 May 2025 00:48:16 +0000 (0:00:00.436) 0:04:48.976 ************ 2025-05-04 00:56:39.262198 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262203 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262209 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262214 | orchestrator | 2025-05-04 00:56:39.262219 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-04 00:56:39.262224 | orchestrator | Sunday 04 May 2025 00:48:17 +0000 (0:00:00.303) 0:04:49.280 ************ 2025-05-04 00:56:39.262230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.262235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.262240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.262246 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262271 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-04 00:56:39.262277 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-04 00:56:39.262283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-04 00:56:39.262288 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262293 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-04 00:56:39.262303 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-04 00:56:39.262308 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-04 00:56:39.262314 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262319 | orchestrator | 2025-05-04 00:56:39.262324 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-04 00:56:39.262330 | orchestrator | Sunday 
04 May 2025 00:48:17 +0000 (0:00:00.751) 0:04:50.032 ************ 2025-05-04 00:56:39.262335 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262343 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262348 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262354 | orchestrator | 2025-05-04 00:56:39.262359 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-04 00:56:39.262365 | orchestrator | Sunday 04 May 2025 00:48:18 +0000 (0:00:00.501) 0:04:50.533 ************ 2025-05-04 00:56:39.262370 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262375 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262381 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262386 | orchestrator | 2025-05-04 00:56:39.262391 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-04 00:56:39.262397 | orchestrator | Sunday 04 May 2025 00:48:18 +0000 (0:00:00.647) 0:04:51.181 ************ 2025-05-04 00:56:39.262405 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262411 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262416 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262422 | orchestrator | 2025-05-04 00:56:39.262427 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-04 00:56:39.262432 | orchestrator | Sunday 04 May 2025 00:48:19 +0000 (0:00:00.502) 0:04:51.683 ************ 2025-05-04 00:56:39.262438 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262443 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262448 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262454 | orchestrator | 2025-05-04 00:56:39.262459 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-04 00:56:39.262464 | orchestrator | Sunday 04 
May 2025 00:48:20 +0000 (0:00:00.691) 0:04:52.374 ************ 2025-05-04 00:56:39.262470 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262475 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262480 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262485 | orchestrator | 2025-05-04 00:56:39.262491 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-04 00:56:39.262496 | orchestrator | Sunday 04 May 2025 00:48:20 +0000 (0:00:00.358) 0:04:52.733 ************ 2025-05-04 00:56:39.262502 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.262507 | orchestrator | 2025-05-04 00:56:39.262512 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-04 00:56:39.262518 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:00.520) 0:04:53.253 ************ 2025-05-04 00:56:39.262523 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262528 | orchestrator | 2025-05-04 00:56:39.262534 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-04 00:56:39.262539 | orchestrator | Sunday 04 May 2025 00:48:21 +0000 (0:00:00.325) 0:04:53.578 ************ 2025-05-04 00:56:39.262544 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-04 00:56:39.262550 | orchestrator | 2025-05-04 00:56:39.262555 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-04 00:56:39.262560 | orchestrator | Sunday 04 May 2025 00:48:22 +0000 (0:00:00.731) 0:04:54.310 ************ 2025-05-04 00:56:39.262566 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262571 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262576 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262582 | orchestrator | 2025-05-04 00:56:39.262587 | orchestrator | TASK 
[ceph-mon : get initial keyring when it already exists] ******************* 2025-05-04 00:56:39.262592 | orchestrator | Sunday 04 May 2025 00:48:22 +0000 (0:00:00.556) 0:04:54.867 ************ 2025-05-04 00:56:39.262597 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262603 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262608 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262613 | orchestrator | 2025-05-04 00:56:39.262619 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-04 00:56:39.262626 | orchestrator | Sunday 04 May 2025 00:48:23 +0000 (0:00:00.417) 0:04:55.284 ************ 2025-05-04 00:56:39.262632 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.262637 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.262642 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.262648 | orchestrator | 2025-05-04 00:56:39.262653 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-04 00:56:39.262659 | orchestrator | Sunday 04 May 2025 00:48:24 +0000 (0:00:01.508) 0:04:56.793 ************ 2025-05-04 00:56:39.262664 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.262670 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.262675 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.262681 | orchestrator | 2025-05-04 00:56:39.262686 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-04 00:56:39.262694 | orchestrator | Sunday 04 May 2025 00:48:25 +0000 (0:00:00.829) 0:04:57.623 ************ 2025-05-04 00:56:39.262700 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.262705 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.262710 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.262715 | orchestrator | 2025-05-04 00:56:39.262721 | orchestrator | TASK [ceph-mon : recursively fix 
ownership of monitor directory] *************** 2025-05-04 00:56:39.262726 | orchestrator | Sunday 04 May 2025 00:48:26 +0000 (0:00:00.681) 0:04:58.304 ************ 2025-05-04 00:56:39.262731 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262737 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262742 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262747 | orchestrator | 2025-05-04 00:56:39.262778 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-04 00:56:39.262786 | orchestrator | Sunday 04 May 2025 00:48:26 +0000 (0:00:00.704) 0:04:59.009 ************ 2025-05-04 00:56:39.262791 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262797 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262802 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262808 | orchestrator | 2025-05-04 00:56:39.262813 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-04 00:56:39.262819 | orchestrator | Sunday 04 May 2025 00:48:27 +0000 (0:00:00.617) 0:04:59.626 ************ 2025-05-04 00:56:39.262824 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262829 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262835 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262840 | orchestrator | 2025-05-04 00:56:39.262846 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-04 00:56:39.262851 | orchestrator | Sunday 04 May 2025 00:48:27 +0000 (0:00:00.362) 0:04:59.989 ************ 2025-05-04 00:56:39.262856 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262862 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262867 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262873 | orchestrator | 2025-05-04 00:56:39.262878 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] 
************************** 2025-05-04 00:56:39.262884 | orchestrator | Sunday 04 May 2025 00:48:28 +0000 (0:00:00.329) 0:05:00.318 ************ 2025-05-04 00:56:39.262889 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.262894 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.262900 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.262905 | orchestrator | 2025-05-04 00:56:39.262910 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-04 00:56:39.262916 | orchestrator | Sunday 04 May 2025 00:48:28 +0000 (0:00:00.336) 0:05:00.655 ************ 2025-05-04 00:56:39.262921 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.262927 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.262932 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.262937 | orchestrator | 2025-05-04 00:56:39.262943 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-04 00:56:39.262948 | orchestrator | Sunday 04 May 2025 00:48:29 +0000 (0:00:01.542) 0:05:02.197 ************ 2025-05-04 00:56:39.262953 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.262959 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.262964 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.262969 | orchestrator | 2025-05-04 00:56:39.262975 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-04 00:56:39.262980 | orchestrator | Sunday 04 May 2025 00:48:30 +0000 (0:00:00.380) 0:05:02.577 ************ 2025-05-04 00:56:39.262986 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.262991 | orchestrator | 2025-05-04 00:56:39.262997 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-04 00:56:39.263002 | orchestrator | Sunday 04 May 
2025 00:48:30 +0000 (0:00:00.562) 0:05:03.140 ************ 2025-05-04 00:56:39.263008 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263022 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263028 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263034 | orchestrator | 2025-05-04 00:56:39.263039 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-04 00:56:39.263044 | orchestrator | Sunday 04 May 2025 00:48:31 +0000 (0:00:00.569) 0:05:03.709 ************ 2025-05-04 00:56:39.263050 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263055 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263061 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263066 | orchestrator | 2025-05-04 00:56:39.263071 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-04 00:56:39.263077 | orchestrator | Sunday 04 May 2025 00:48:31 +0000 (0:00:00.337) 0:05:04.046 ************ 2025-05-04 00:56:39.263082 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.263088 | orchestrator | 2025-05-04 00:56:39.263093 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-04 00:56:39.263098 | orchestrator | Sunday 04 May 2025 00:48:32 +0000 (0:00:00.602) 0:05:04.649 ************ 2025-05-04 00:56:39.263104 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263109 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263115 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263120 | orchestrator | 2025-05-04 00:56:39.263125 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-04 00:56:39.263131 | orchestrator | Sunday 04 May 2025 00:48:34 +0000 (0:00:01.688) 0:05:06.337 ************ 2025-05-04 
00:56:39.263136 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263142 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263147 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263152 | orchestrator | 2025-05-04 00:56:39.263158 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-04 00:56:39.263166 | orchestrator | Sunday 04 May 2025 00:48:35 +0000 (0:00:01.265) 0:05:07.603 ************ 2025-05-04 00:56:39.263171 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263176 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263182 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263187 | orchestrator | 2025-05-04 00:56:39.263193 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-04 00:56:39.263198 | orchestrator | Sunday 04 May 2025 00:48:37 +0000 (0:00:01.729) 0:05:09.332 ************ 2025-05-04 00:56:39.263203 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263209 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263214 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263219 | orchestrator | 2025-05-04 00:56:39.263225 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-04 00:56:39.263230 | orchestrator | Sunday 04 May 2025 00:48:39 +0000 (0:00:02.105) 0:05:11.438 ************ 2025-05-04 00:56:39.263236 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.263241 | orchestrator | 2025-05-04 00:56:39.263259 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] 
************* 2025-05-04 00:56:39.263266 | orchestrator | Sunday 04 May 2025 00:48:39 +0000 (0:00:00.633) 0:05:12.072 ************ 2025-05-04 00:56:39.263271 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-04 00:56:39.263277 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263282 | orchestrator | 2025-05-04 00:56:39.263288 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-04 00:56:39.263293 | orchestrator | Sunday 04 May 2025 00:49:01 +0000 (0:00:21.501) 0:05:33.573 ************ 2025-05-04 00:56:39.263298 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263304 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.263309 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263318 | orchestrator | 2025-05-04 00:56:39.263324 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-04 00:56:39.263329 | orchestrator | Sunday 04 May 2025 00:49:08 +0000 (0:00:07.132) 0:05:40.705 ************ 2025-05-04 00:56:39.263335 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263340 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263345 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263351 | orchestrator | 2025-05-04 00:56:39.263356 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.263362 | orchestrator | Sunday 04 May 2025 00:49:09 +0000 (0:00:01.152) 0:05:41.858 ************ 2025-05-04 00:56:39.263367 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263373 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263378 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263383 | orchestrator | 2025-05-04 00:56:39.263389 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 
2025-05-04 00:56:39.263394 | orchestrator | Sunday 04 May 2025 00:49:10 +0000 (0:00:00.665) 0:05:42.523 ************ 2025-05-04 00:56:39.263399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.263405 | orchestrator | 2025-05-04 00:56:39.263410 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-04 00:56:39.263416 | orchestrator | Sunday 04 May 2025 00:49:11 +0000 (0:00:00.789) 0:05:43.313 ************ 2025-05-04 00:56:39.263421 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263426 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.263432 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263437 | orchestrator | 2025-05-04 00:56:39.263442 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-04 00:56:39.263448 | orchestrator | Sunday 04 May 2025 00:49:11 +0000 (0:00:00.376) 0:05:43.689 ************ 2025-05-04 00:56:39.263453 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263459 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263464 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263470 | orchestrator | 2025-05-04 00:56:39.263475 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-04 00:56:39.263480 | orchestrator | Sunday 04 May 2025 00:49:12 +0000 (0:00:01.199) 0:05:44.889 ************ 2025-05-04 00:56:39.263486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.263491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.263497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.263502 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263507 | orchestrator | 2025-05-04 00:56:39.263513 | orchestrator | RUNNING 
HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-04 00:56:39.263518 | orchestrator | Sunday 04 May 2025 00:49:13 +0000 (0:00:01.180) 0:05:46.070 ************ 2025-05-04 00:56:39.263524 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263529 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.263535 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263540 | orchestrator | 2025-05-04 00:56:39.263545 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.263551 | orchestrator | Sunday 04 May 2025 00:49:14 +0000 (0:00:00.393) 0:05:46.464 ************ 2025-05-04 00:56:39.263556 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.263561 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.263567 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.263572 | orchestrator | 2025-05-04 00:56:39.263578 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-04 00:56:39.263583 | orchestrator | 2025-05-04 00:56:39.263588 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.263594 | orchestrator | Sunday 04 May 2025 00:49:16 +0000 (0:00:02.185) 0:05:48.649 ************ 2025-05-04 00:56:39.263599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.263607 | orchestrator | 2025-05-04 00:56:39.263613 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.263618 | orchestrator | Sunday 04 May 2025 00:49:17 +0000 (0:00:00.609) 0:05:49.259 ************ 2025-05-04 00:56:39.263624 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263629 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263635 | orchestrator | ok: [testbed-node-1] 2025-05-04 
00:56:39.263640 | orchestrator | 2025-05-04 00:56:39.263645 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.263651 | orchestrator | Sunday 04 May 2025 00:49:17 +0000 (0:00:00.698) 0:05:49.957 ************ 2025-05-04 00:56:39.263656 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263661 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263667 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263672 | orchestrator | 2025-05-04 00:56:39.263679 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.263685 | orchestrator | Sunday 04 May 2025 00:49:18 +0000 (0:00:00.278) 0:05:50.236 ************ 2025-05-04 00:56:39.263691 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263696 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263701 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263707 | orchestrator | 2025-05-04 00:56:39.263724 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-04 00:56:39.263730 | orchestrator | Sunday 04 May 2025 00:49:18 +0000 (0:00:00.454) 0:05:50.690 ************ 2025-05-04 00:56:39.263736 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263743 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263749 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263754 | orchestrator | 2025-05-04 00:56:39.263760 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.263778 | orchestrator | Sunday 04 May 2025 00:49:18 +0000 (0:00:00.276) 0:05:50.967 ************ 2025-05-04 00:56:39.263784 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263789 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.263795 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263800 | 
orchestrator | 2025-05-04 00:56:39.263805 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-04 00:56:39.263811 | orchestrator | Sunday 04 May 2025 00:49:19 +0000 (0:00:00.634) 0:05:51.601 ************ 2025-05-04 00:56:39.263816 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263821 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263827 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263832 | orchestrator | 2025-05-04 00:56:39.263837 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-04 00:56:39.263843 | orchestrator | Sunday 04 May 2025 00:49:19 +0000 (0:00:00.275) 0:05:51.876 ************ 2025-05-04 00:56:39.263848 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263854 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263859 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263864 | orchestrator | 2025-05-04 00:56:39.263870 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-04 00:56:39.263875 | orchestrator | Sunday 04 May 2025 00:49:20 +0000 (0:00:00.439) 0:05:52.316 ************ 2025-05-04 00:56:39.263880 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263886 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263891 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263896 | orchestrator | 2025-05-04 00:56:39.263902 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-04 00:56:39.263907 | orchestrator | Sunday 04 May 2025 00:49:20 +0000 (0:00:00.292) 0:05:52.608 ************ 2025-05-04 00:56:39.263912 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263918 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263923 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263932 | 
orchestrator | 2025-05-04 00:56:39.263937 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-04 00:56:39.263943 | orchestrator | Sunday 04 May 2025 00:49:20 +0000 (0:00:00.312) 0:05:52.921 ************ 2025-05-04 00:56:39.263948 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.263953 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.263959 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.263964 | orchestrator | 2025-05-04 00:56:39.263969 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-04 00:56:39.263975 | orchestrator | Sunday 04 May 2025 00:49:20 +0000 (0:00:00.280) 0:05:53.202 ************ 2025-05-04 00:56:39.263980 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.263986 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.263991 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.263996 | orchestrator | 2025-05-04 00:56:39.264002 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-04 00:56:39.264007 | orchestrator | Sunday 04 May 2025 00:49:21 +0000 (0:00:00.866) 0:05:54.068 ************ 2025-05-04 00:56:39.264012 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264018 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264023 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264028 | orchestrator | 2025-05-04 00:56:39.264034 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-04 00:56:39.264039 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:00.327) 0:05:54.396 ************ 2025-05-04 00:56:39.264045 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.264050 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.264055 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.264061 | orchestrator | 2025-05-04 
00:56:39.264066 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-04 00:56:39.264072 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:00.421) 0:05:54.818 ************ 2025-05-04 00:56:39.264077 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264082 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264088 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264093 | orchestrator | 2025-05-04 00:56:39.264099 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-04 00:56:39.264104 | orchestrator | Sunday 04 May 2025 00:49:22 +0000 (0:00:00.340) 0:05:55.158 ************ 2025-05-04 00:56:39.264109 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264115 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264120 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264125 | orchestrator | 2025-05-04 00:56:39.264131 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-04 00:56:39.264136 | orchestrator | Sunday 04 May 2025 00:49:23 +0000 (0:00:00.633) 0:05:55.792 ************ 2025-05-04 00:56:39.264141 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264147 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264152 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264157 | orchestrator | 2025-05-04 00:56:39.264163 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-04 00:56:39.264168 | orchestrator | Sunday 04 May 2025 00:49:23 +0000 (0:00:00.383) 0:05:56.175 ************ 2025-05-04 00:56:39.264174 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264179 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264184 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264190 | orchestrator | 2025-05-04 
00:56:39.264195 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-04 00:56:39.264203 | orchestrator | Sunday 04 May 2025 00:49:24 +0000 (0:00:00.467) 0:05:56.643 ************ 2025-05-04 00:56:39.264208 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264214 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264219 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264224 | orchestrator | 2025-05-04 00:56:39.264247 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-04 00:56:39.264253 | orchestrator | Sunday 04 May 2025 00:49:24 +0000 (0:00:00.400) 0:05:57.043 ************ 2025-05-04 00:56:39.264259 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.264264 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.264269 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.264275 | orchestrator | 2025-05-04 00:56:39.264280 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-04 00:56:39.264285 | orchestrator | Sunday 04 May 2025 00:49:25 +0000 (0:00:00.711) 0:05:57.755 ************ 2025-05-04 00:56:39.264291 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.264296 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.264301 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.264307 | orchestrator | 2025-05-04 00:56:39.264312 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-04 00:56:39.264318 | orchestrator | Sunday 04 May 2025 00:49:25 +0000 (0:00:00.388) 0:05:58.144 ************ 2025-05-04 00:56:39.264323 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264332 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264338 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264343 | orchestrator | 2025-05-04 00:56:39.264349 | orchestrator | TASK 
[ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-04 00:56:39.264354 | orchestrator | Sunday 04 May 2025 00:49:26 +0000 (0:00:00.357) 0:05:58.501 ************ 2025-05-04 00:56:39.264360 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264365 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264370 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264376 | orchestrator | 2025-05-04 00:56:39.264381 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-04 00:56:39.264386 | orchestrator | Sunday 04 May 2025 00:49:26 +0000 (0:00:00.597) 0:05:59.098 ************ 2025-05-04 00:56:39.264392 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264397 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264403 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264408 | orchestrator | 2025-05-04 00:56:39.264413 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-04 00:56:39.264419 | orchestrator | Sunday 04 May 2025 00:49:27 +0000 (0:00:00.304) 0:05:59.403 ************ 2025-05-04 00:56:39.264424 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264429 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264435 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264440 | orchestrator | 2025-05-04 00:56:39.264445 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-04 00:56:39.264451 | orchestrator | Sunday 04 May 2025 00:49:27 +0000 (0:00:00.344) 0:05:59.747 ************ 2025-05-04 00:56:39.264456 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264462 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264467 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264472 | orchestrator | 2025-05-04 00:56:39.264478 | orchestrator | TASK 
[ceph-config : set_fact rejected_devices] ********************************* 2025-05-04 00:56:39.264483 | orchestrator | Sunday 04 May 2025 00:49:27 +0000 (0:00:00.342) 0:06:00.090 ************ 2025-05-04 00:56:39.264489 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264494 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264500 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264505 | orchestrator | 2025-05-04 00:56:39.264510 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-04 00:56:39.264516 | orchestrator | Sunday 04 May 2025 00:49:28 +0000 (0:00:00.467) 0:06:00.557 ************ 2025-05-04 00:56:39.264521 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264526 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264531 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264537 | orchestrator | 2025-05-04 00:56:39.264542 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-04 00:56:39.264551 | orchestrator | Sunday 04 May 2025 00:49:28 +0000 (0:00:00.313) 0:06:00.871 ************ 2025-05-04 00:56:39.264556 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264562 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264567 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264572 | orchestrator | 2025-05-04 00:56:39.264578 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-04 00:56:39.264583 | orchestrator | Sunday 04 May 2025 00:49:28 +0000 (0:00:00.304) 0:06:01.175 ************ 2025-05-04 00:56:39.264589 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264594 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264599 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264605 | orchestrator | 
2025-05-04 00:56:39.264610 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-04 00:56:39.264616 | orchestrator | Sunday 04 May 2025 00:49:29 +0000 (0:00:00.306) 0:06:01.482 ************ 2025-05-04 00:56:39.264621 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264626 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264632 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264637 | orchestrator | 2025-05-04 00:56:39.264643 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-04 00:56:39.264648 | orchestrator | Sunday 04 May 2025 00:49:29 +0000 (0:00:00.446) 0:06:01.928 ************ 2025-05-04 00:56:39.264653 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264659 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264664 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264669 | orchestrator | 2025-05-04 00:56:39.264675 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-04 00:56:39.264680 | orchestrator | Sunday 04 May 2025 00:49:30 +0000 (0:00:00.294) 0:06:02.222 ************ 2025-05-04 00:56:39.264685 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264691 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264696 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264701 | orchestrator | 2025-05-04 00:56:39.264707 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-04 00:56:39.264712 | orchestrator | Sunday 04 May 2025 00:49:30 +0000 (0:00:00.310) 0:06:02.533 ************ 2025-05-04 00:56:39.264730 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.264736 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-04 00:56:39.264741 | orchestrator 
| skipping: [testbed-node-0] 2025-05-04 00:56:39.264747 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.264752 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-04 00:56:39.264758 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264790 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.264797 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-04 00:56:39.264802 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264808 | orchestrator | 2025-05-04 00:56:39.264813 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-04 00:56:39.264819 | orchestrator | Sunday 04 May 2025 00:49:30 +0000 (0:00:00.385) 0:06:02.918 ************ 2025-05-04 00:56:39.264824 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-04 00:56:39.264830 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-04 00:56:39.264835 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264841 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-04 00:56:39.264846 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-04 00:56:39.264851 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264857 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-04 00:56:39.264862 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-04 00:56:39.264871 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264877 | orchestrator | 2025-05-04 00:56:39.264882 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-04 00:56:39.264888 | orchestrator | Sunday 04 May 2025 00:49:31 +0000 (0:00:00.494) 0:06:03.412 ************ 2025-05-04 00:56:39.264893 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264898 | 
orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264904 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264909 | orchestrator | 2025-05-04 00:56:39.264914 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-04 00:56:39.264922 | orchestrator | Sunday 04 May 2025 00:49:31 +0000 (0:00:00.297) 0:06:03.710 ************ 2025-05-04 00:56:39.264928 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264934 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264939 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264944 | orchestrator | 2025-05-04 00:56:39.264950 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:56:39.264955 | orchestrator | Sunday 04 May 2025 00:49:31 +0000 (0:00:00.299) 0:06:04.009 ************ 2025-05-04 00:56:39.264961 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.264966 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.264971 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.264977 | orchestrator | 2025-05-04 00:56:39.264982 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:56:39.264988 | orchestrator | Sunday 04 May 2025 00:49:32 +0000 (0:00:00.303) 0:06:04.313 ************ 2025-05-04 00:56:39.264993 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265001 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265006 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265012 | orchestrator | 2025-05-04 00:56:39.265017 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:56:39.265022 | orchestrator | Sunday 04 May 2025 00:49:32 +0000 (0:00:00.550) 0:06:04.864 ************ 2025-05-04 00:56:39.265028 | orchestrator | 
skipping: [testbed-node-0] 2025-05-04 00:56:39.265033 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265039 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265044 | orchestrator | 2025-05-04 00:56:39.265050 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:56:39.265055 | orchestrator | Sunday 04 May 2025 00:49:33 +0000 (0:00:00.342) 0:06:05.206 ************ 2025-05-04 00:56:39.265061 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265066 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265071 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265077 | orchestrator | 2025-05-04 00:56:39.265082 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-04 00:56:39.265088 | orchestrator | Sunday 04 May 2025 00:49:33 +0000 (0:00:00.329) 0:06:05.536 ************ 2025-05-04 00:56:39.265093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.265099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.265104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.265109 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265114 | orchestrator | 2025-05-04 00:56:39.265119 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:56:39.265124 | orchestrator | Sunday 04 May 2025 00:49:33 +0000 (0:00:00.383) 0:06:05.920 ************ 2025-05-04 00:56:39.265129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.265134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.265139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.265144 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265149 | 
orchestrator | 2025-05-04 00:56:39.265157 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:56:39.265162 | orchestrator | Sunday 04 May 2025 00:49:34 +0000 (0:00:00.376) 0:06:06.297 ************ 2025-05-04 00:56:39.265167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.265172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.265177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.265182 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265187 | orchestrator | 2025-05-04 00:56:39.265192 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.265211 | orchestrator | Sunday 04 May 2025 00:49:34 +0000 (0:00:00.691) 0:06:06.989 ************ 2025-05-04 00:56:39.265218 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265223 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265230 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265235 | orchestrator | 2025-05-04 00:56:39.265240 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:56:39.265245 | orchestrator | Sunday 04 May 2025 00:49:35 +0000 (0:00:00.619) 0:06:07.609 ************ 2025-05-04 00:56:39.265250 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.265255 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265260 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.265265 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265270 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.265274 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265279 | orchestrator | 2025-05-04 00:56:39.265284 | orchestrator | TASK [ceph-facts : set_fact 
is_rgw_instances_defined] ************************** 2025-05-04 00:56:39.265289 | orchestrator | Sunday 04 May 2025 00:49:35 +0000 (0:00:00.475) 0:06:08.085 ************ 2025-05-04 00:56:39.265294 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265299 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265304 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265309 | orchestrator | 2025-05-04 00:56:39.265314 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.265319 | orchestrator | Sunday 04 May 2025 00:49:36 +0000 (0:00:00.360) 0:06:08.445 ************ 2025-05-04 00:56:39.265324 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265329 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265333 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265338 | orchestrator | 2025-05-04 00:56:39.265343 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:56:39.265348 | orchestrator | Sunday 04 May 2025 00:49:36 +0000 (0:00:00.358) 0:06:08.804 ************ 2025-05-04 00:56:39.265353 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-04 00:56:39.265358 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265363 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-04 00:56:39.265368 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265372 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-04 00:56:39.265377 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265382 | orchestrator | 2025-05-04 00:56:39.265387 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:56:39.265392 | orchestrator | Sunday 04 May 2025 00:49:37 +0000 (0:00:00.878) 0:06:09.682 ************ 2025-05-04 00:56:39.265397 | orchestrator | skipping: [testbed-node-0] 
2025-05-04 00:56:39.265402 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265407 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265411 | orchestrator | 2025-05-04 00:56:39.265416 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-04 00:56:39.265421 | orchestrator | Sunday 04 May 2025 00:49:37 +0000 (0:00:00.368) 0:06:10.051 ************ 2025-05-04 00:56:39.265426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-04 00:56:39.265434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-04 00:56:39.265439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-04 00:56:39.265444 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-04 00:56:39.265454 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-04 00:56:39.265459 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-04 00:56:39.265463 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265468 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-04 00:56:39.265473 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-04 00:56:39.265478 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-04 00:56:39.265483 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265488 | orchestrator | 2025-05-04 00:56:39.265493 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-04 00:56:39.265498 | orchestrator | Sunday 04 May 2025 00:49:38 +0000 (0:00:00.654) 0:06:10.705 ************ 2025-05-04 00:56:39.265502 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265507 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265512 | orchestrator | skipping: 
[testbed-node-2] 2025-05-04 00:56:39.265517 | orchestrator | 2025-05-04 00:56:39.265522 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-04 00:56:39.265527 | orchestrator | Sunday 04 May 2025 00:49:39 +0000 (0:00:00.962) 0:06:11.667 ************ 2025-05-04 00:56:39.265532 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265537 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265541 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265546 | orchestrator | 2025-05-04 00:56:39.265553 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-04 00:56:39.265558 | orchestrator | Sunday 04 May 2025 00:49:40 +0000 (0:00:00.569) 0:06:12.237 ************ 2025-05-04 00:56:39.265563 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265568 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265573 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265578 | orchestrator | 2025-05-04 00:56:39.265583 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-04 00:56:39.265588 | orchestrator | Sunday 04 May 2025 00:49:41 +0000 (0:00:01.037) 0:06:13.275 ************ 2025-05-04 00:56:39.265593 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265598 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265603 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265608 | orchestrator | 2025-05-04 00:56:39.265612 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-04 00:56:39.265617 | orchestrator | Sunday 04 May 2025 00:49:42 +0000 (0:00:00.935) 0:06:14.210 ************ 2025-05-04 00:56:39.265622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:56:39.265638 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2025-05-04 00:56:39.265644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:56:39.265649 | orchestrator | 2025-05-04 00:56:39.265654 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-04 00:56:39.265659 | orchestrator | Sunday 04 May 2025 00:49:42 +0000 (0:00:00.767) 0:06:14.978 ************ 2025-05-04 00:56:39.265664 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.265669 | orchestrator | 2025-05-04 00:56:39.265674 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-04 00:56:39.265679 | orchestrator | Sunday 04 May 2025 00:49:43 +0000 (0:00:00.615) 0:06:15.593 ************ 2025-05-04 00:56:39.265684 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.265689 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.265698 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.265703 | orchestrator | 2025-05-04 00:56:39.265708 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-04 00:56:39.265713 | orchestrator | Sunday 04 May 2025 00:49:44 +0000 (0:00:00.756) 0:06:16.350 ************ 2025-05-04 00:56:39.265718 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265723 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265728 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265733 | orchestrator | 2025-05-04 00:56:39.265737 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-04 00:56:39.265742 | orchestrator | Sunday 04 May 2025 00:49:44 +0000 (0:00:00.774) 0:06:17.125 ************ 2025-05-04 00:56:39.265747 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 00:56:39.265752 | orchestrator | changed: 
[testbed-node-0] => (item=None) 2025-05-04 00:56:39.265757 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 00:56:39.265762 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-04 00:56:39.265781 | orchestrator | 2025-05-04 00:56:39.265789 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-04 00:56:39.265796 | orchestrator | Sunday 04 May 2025 00:49:52 +0000 (0:00:07.838) 0:06:24.963 ************ 2025-05-04 00:56:39.265804 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.265811 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.265823 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.265831 | orchestrator | 2025-05-04 00:56:39.265836 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-04 00:56:39.265841 | orchestrator | Sunday 04 May 2025 00:49:53 +0000 (0:00:00.617) 0:06:25.581 ************ 2025-05-04 00:56:39.265846 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-04 00:56:39.265851 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-04 00:56:39.265856 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-04 00:56:39.265861 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-04 00:56:39.265866 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:56:39.265871 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:56:39.265876 | orchestrator | 2025-05-04 00:56:39.265881 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-04 00:56:39.265886 | orchestrator | Sunday 04 May 2025 00:49:55 +0000 (0:00:01.814) 0:06:27.396 ************ 2025-05-04 00:56:39.265891 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-04 00:56:39.265896 | orchestrator | skipping: [testbed-node-1] 
=> (item=None)  2025-05-04 00:56:39.265901 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-04 00:56:39.265906 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 00:56:39.265910 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-04 00:56:39.265915 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-04 00:56:39.265920 | orchestrator | 2025-05-04 00:56:39.265925 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-04 00:56:39.265930 | orchestrator | Sunday 04 May 2025 00:49:56 +0000 (0:00:01.259) 0:06:28.655 ************ 2025-05-04 00:56:39.265935 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.265940 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.265945 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.265949 | orchestrator | 2025-05-04 00:56:39.265954 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-04 00:56:39.265959 | orchestrator | Sunday 04 May 2025 00:49:57 +0000 (0:00:00.980) 0:06:29.635 ************ 2025-05-04 00:56:39.265964 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.265969 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.265974 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.265979 | orchestrator | 2025-05-04 00:56:39.265984 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-04 00:56:39.265992 | orchestrator | Sunday 04 May 2025 00:49:57 +0000 (0:00:00.342) 0:06:29.977 ************ 2025-05-04 00:56:39.265997 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.266002 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.266007 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.266024 | orchestrator | 2025-05-04 00:56:39.266031 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] 
**************************************** 2025-05-04 00:56:39.266036 | orchestrator | Sunday 04 May 2025 00:49:58 +0000 (0:00:00.323) 0:06:30.300 ************ 2025-05-04 00:56:39.266041 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.266046 | orchestrator | 2025-05-04 00:56:39.266053 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-04 00:56:39.266058 | orchestrator | Sunday 04 May 2025 00:49:58 +0000 (0:00:00.824) 0:06:31.125 ************ 2025-05-04 00:56:39.266063 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.266068 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.266073 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.266078 | orchestrator | 2025-05-04 00:56:39.266083 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-04 00:56:39.266101 | orchestrator | Sunday 04 May 2025 00:49:59 +0000 (0:00:00.369) 0:06:31.494 ************ 2025-05-04 00:56:39.266107 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.266112 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.266117 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.266122 | orchestrator | 2025-05-04 00:56:39.266127 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-04 00:56:39.266132 | orchestrator | Sunday 04 May 2025 00:49:59 +0000 (0:00:00.348) 0:06:31.843 ************ 2025-05-04 00:56:39.266137 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.266141 | orchestrator | 2025-05-04 00:56:39.266146 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-04 00:56:39.266151 | orchestrator | Sunday 04 May 2025 00:50:00 +0000 (0:00:00.919) 
0:06:32.763 ************ 2025-05-04 00:56:39.266156 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266161 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266166 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266171 | orchestrator | 2025-05-04 00:56:39.266176 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-04 00:56:39.266181 | orchestrator | Sunday 04 May 2025 00:50:01 +0000 (0:00:01.274) 0:06:34.037 ************ 2025-05-04 00:56:39.266185 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266190 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266195 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266200 | orchestrator | 2025-05-04 00:56:39.266205 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-04 00:56:39.266210 | orchestrator | Sunday 04 May 2025 00:50:02 +0000 (0:00:01.152) 0:06:35.190 ************ 2025-05-04 00:56:39.266215 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266220 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266225 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266230 | orchestrator | 2025-05-04 00:56:39.266235 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-04 00:56:39.266240 | orchestrator | Sunday 04 May 2025 00:50:04 +0000 (0:00:01.966) 0:06:37.156 ************ 2025-05-04 00:56:39.266245 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266250 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266255 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266260 | orchestrator | 2025-05-04 00:56:39.266264 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-04 00:56:39.266269 | orchestrator | Sunday 04 May 2025 00:50:06 +0000 (0:00:01.905) 0:06:39.062 
************ 2025-05-04 00:56:39.266274 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.266282 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.266288 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-04 00:56:39.266293 | orchestrator | 2025-05-04 00:56:39.266297 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-04 00:56:39.266302 | orchestrator | Sunday 04 May 2025 00:50:07 +0000 (0:00:00.670) 0:06:39.732 ************ 2025-05-04 00:56:39.266307 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-04 00:56:39.266313 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-04 00:56:39.266318 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.266323 | orchestrator | 2025-05-04 00:56:39.266328 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-04 00:56:39.266333 | orchestrator | Sunday 04 May 2025 00:50:21 +0000 (0:00:13.763) 0:06:53.496 ************ 2025-05-04 00:56:39.266338 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.266342 | orchestrator | 2025-05-04 00:56:39.266347 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-04 00:56:39.266352 | orchestrator | Sunday 04 May 2025 00:50:22 +0000 (0:00:01.660) 0:06:55.157 ************ 2025-05-04 00:56:39.266366 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.266371 | orchestrator | 2025-05-04 00:56:39.266376 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-04 00:56:39.266381 | orchestrator | Sunday 04 May 2025 00:50:23 +0000 (0:00:00.468) 0:06:55.625 ************ 2025-05-04 
00:56:39.266386 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.266391 | orchestrator | 2025-05-04 00:56:39.266395 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-04 00:56:39.266401 | orchestrator | Sunday 04 May 2025 00:50:23 +0000 (0:00:00.313) 0:06:55.939 ************ 2025-05-04 00:56:39.266406 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-04 00:56:39.266410 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-04 00:56:39.266415 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-04 00:56:39.266420 | orchestrator | 2025-05-04 00:56:39.266425 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-04 00:56:39.266434 | orchestrator | Sunday 04 May 2025 00:50:30 +0000 (0:00:06.614) 0:07:02.553 ************ 2025-05-04 00:56:39.266439 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-04 00:56:39.266444 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-04 00:56:39.266449 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-04 00:56:39.266454 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-04 00:56:39.266459 | orchestrator | 2025-05-04 00:56:39.266463 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.266468 | orchestrator | Sunday 04 May 2025 00:50:35 +0000 (0:00:05.559) 0:07:08.113 ************ 2025-05-04 00:56:39.266473 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266478 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266483 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266488 | orchestrator | 2025-05-04 00:56:39.266504 | orchestrator | 
RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-04 00:56:39.266510 | orchestrator | Sunday 04 May 2025 00:50:36 +0000 (0:00:00.946) 0:07:09.060 ************ 2025-05-04 00:56:39.266515 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:56:39.266520 | orchestrator | 2025-05-04 00:56:39.266525 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-04 00:56:39.266530 | orchestrator | Sunday 04 May 2025 00:50:37 +0000 (0:00:00.595) 0:07:09.655 ************ 2025-05-04 00:56:39.266538 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.266543 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.266548 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.266553 | orchestrator | 2025-05-04 00:56:39.266558 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-04 00:56:39.266563 | orchestrator | Sunday 04 May 2025 00:50:37 +0000 (0:00:00.369) 0:07:10.024 ************ 2025-05-04 00:56:39.266568 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266573 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266578 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266583 | orchestrator | 2025-05-04 00:56:39.266588 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-04 00:56:39.266593 | orchestrator | Sunday 04 May 2025 00:50:39 +0000 (0:00:01.195) 0:07:11.220 ************ 2025-05-04 00:56:39.266598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:56:39.266603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:56:39.266608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:56:39.266613 | orchestrator | skipping: [testbed-node-0] 2025-05-04 
00:56:39.266618 | orchestrator | 2025-05-04 00:56:39.266623 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-04 00:56:39.266628 | orchestrator | Sunday 04 May 2025 00:50:39 +0000 (0:00:00.696) 0:07:11.917 ************ 2025-05-04 00:56:39.266633 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.266638 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.266643 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.266648 | orchestrator | 2025-05-04 00:56:39.266653 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.266658 | orchestrator | Sunday 04 May 2025 00:50:40 +0000 (0:00:00.366) 0:07:12.284 ************ 2025-05-04 00:56:39.266662 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.266667 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.266672 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.266677 | orchestrator | 2025-05-04 00:56:39.266682 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-04 00:56:39.266687 | orchestrator | 2025-05-04 00:56:39.266692 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.266697 | orchestrator | Sunday 04 May 2025 00:50:42 +0000 (0:00:02.270) 0:07:14.554 ************ 2025-05-04 00:56:39.266702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.266707 | orchestrator | 2025-05-04 00:56:39.266712 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.266717 | orchestrator | Sunday 04 May 2025 00:50:42 +0000 (0:00:00.536) 0:07:15.091 ************ 2025-05-04 00:56:39.266722 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.266729 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:56:39.266734 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.266739 | orchestrator | 2025-05-04 00:56:39.266744 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.266749 | orchestrator | Sunday 04 May 2025 00:50:43 +0000 (0:00:00.322) 0:07:15.413 ************ 2025-05-04 00:56:39.266753 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.266758 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.266788 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.266794 | orchestrator | 2025-05-04 00:56:39.266799 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.266804 | orchestrator | Sunday 04 May 2025 00:50:44 +0000 (0:00:01.017) 0:07:16.431 ************ 2025-05-04 00:56:39.266809 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.266814 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.266819 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.266824 | orchestrator | 2025-05-04 00:56:39.266829 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-04 00:56:39.266837 | orchestrator | Sunday 04 May 2025 00:50:44 +0000 (0:00:00.748) 0:07:17.179 ************ 2025-05-04 00:56:39.266842 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.266847 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.266852 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.266856 | orchestrator | 2025-05-04 00:56:39.266861 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.266866 | orchestrator | Sunday 04 May 2025 00:50:45 +0000 (0:00:00.737) 0:07:17.917 ************ 2025-05-04 00:56:39.266871 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.266876 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.266881 | 
orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.266886 | orchestrator |
2025-05-04 00:56:39.266891 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-04 00:56:39.266896 | orchestrator | Sunday 04 May 2025 00:50:46 +0000 (0:00:00.367) 0:07:18.285 ************
2025-05-04 00:56:39.266901 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.266905 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.266910 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.266915 | orchestrator |
2025-05-04 00:56:39.266922 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-04 00:56:39.266927 | orchestrator | Sunday 04 May 2025 00:50:46 +0000 (0:00:00.688) 0:07:18.973 ************
2025-05-04 00:56:39.266932 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.266937 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.266942 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.266947 | orchestrator |
2025-05-04 00:56:39.266952 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-04 00:56:39.266970 | orchestrator | Sunday 04 May 2025 00:50:47 +0000 (0:00:00.354) 0:07:19.328 ************
2025-05-04 00:56:39.266976 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.266981 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.266986 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.266991 | orchestrator |
2025-05-04 00:56:39.266996 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-04 00:56:39.267001 | orchestrator | Sunday 04 May 2025 00:50:47 +0000 (0:00:00.337) 0:07:19.666 ************
2025-05-04 00:56:39.267006 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267010 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267015 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267020 | orchestrator |
2025-05-04 00:56:39.267025 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-04 00:56:39.267030 | orchestrator | Sunday 04 May 2025 00:50:47 +0000 (0:00:00.369) 0:07:20.036 ************
2025-05-04 00:56:39.267035 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267040 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267045 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267050 | orchestrator |
2025-05-04 00:56:39.267055 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-04 00:56:39.267060 | orchestrator | Sunday 04 May 2025 00:50:48 +0000 (0:00:00.632) 0:07:20.669 ************
2025-05-04 00:56:39.267065 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.267070 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.267074 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.267079 | orchestrator |
2025-05-04 00:56:39.267084 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-04 00:56:39.267089 | orchestrator | Sunday 04 May 2025 00:50:49 +0000 (0:00:00.791) 0:07:21.461 ************
2025-05-04 00:56:39.267094 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267099 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267104 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267109 | orchestrator |
2025-05-04 00:56:39.267114 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-04 00:56:39.267122 | orchestrator | Sunday 04 May 2025 00:50:49 +0000 (0:00:00.348) 0:07:21.809 ************
2025-05-04 00:56:39.267127 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267132 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267137 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267142 | orchestrator |
2025-05-04 00:56:39.267147 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-04 00:56:39.267152 | orchestrator | Sunday 04 May 2025 00:50:49 +0000 (0:00:00.311) 0:07:22.120 ************
2025-05-04 00:56:39.267157 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.267162 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.267167 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.267172 | orchestrator |
2025-05-04 00:56:39.267177 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-04 00:56:39.267182 | orchestrator | Sunday 04 May 2025 00:50:50 +0000 (0:00:00.656) 0:07:22.777 ************
2025-05-04 00:56:39.267187 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.267191 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.267196 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.267201 | orchestrator |
2025-05-04 00:56:39.267206 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-04 00:56:39.267211 | orchestrator | Sunday 04 May 2025 00:50:50 +0000 (0:00:00.347) 0:07:23.125 ************
2025-05-04 00:56:39.267216 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.267221 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.267226 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.267231 | orchestrator |
2025-05-04 00:56:39.267235 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-04 00:56:39.267240 | orchestrator | Sunday 04 May 2025 00:50:51 +0000 (0:00:00.346) 0:07:23.472 ************
2025-05-04 00:56:39.267245 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267250 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267255 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267260 | orchestrator |
2025-05-04 00:56:39.267265 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-04 00:56:39.267270 | orchestrator | Sunday 04 May 2025 00:50:51 +0000 (0:00:00.356) 0:07:23.829 ************
2025-05-04 00:56:39.267275 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267283 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267288 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267293 | orchestrator |
2025-05-04 00:56:39.267298 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-04 00:56:39.267303 | orchestrator | Sunday 04 May 2025 00:50:52 +0000 (0:00:00.849) 0:07:24.678 ************
2025-05-04 00:56:39.267308 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267313 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267318 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267322 | orchestrator |
2025-05-04 00:56:39.267327 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-04 00:56:39.267332 | orchestrator | Sunday 04 May 2025 00:50:52 +0000 (0:00:00.379) 0:07:25.058 ************
2025-05-04 00:56:39.267337 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.267342 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.267347 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.267352 | orchestrator |
2025-05-04 00:56:39.267357 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-04 00:56:39.267362 | orchestrator | Sunday 04 May 2025 00:50:53 +0000 (0:00:00.380) 0:07:25.438 ************
2025-05-04 00:56:39.267367 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267372 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267377 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267382 | orchestrator |
2025-05-04 00:56:39.267386 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-04 00:56:39.267391 | orchestrator | Sunday 04 May 2025 00:50:53 +0000 (0:00:00.331) 0:07:25.769 ************
2025-05-04 00:56:39.267399 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267404 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267409 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267414 | orchestrator |
2025-05-04 00:56:39.267421 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-04 00:56:39.267437 | orchestrator | Sunday 04 May 2025 00:50:54 +0000 (0:00:00.697) 0:07:26.467 ************
2025-05-04 00:56:39.267443 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267448 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267453 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267458 | orchestrator |
2025-05-04 00:56:39.267463 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-04 00:56:39.267468 | orchestrator | Sunday 04 May 2025 00:50:54 +0000 (0:00:00.467) 0:07:26.935 ************
2025-05-04 00:56:39.267473 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267478 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267483 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267488 | orchestrator |
2025-05-04 00:56:39.267493 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-04 00:56:39.267498 | orchestrator | Sunday 04 May 2025 00:50:55 +0000 (0:00:00.386) 0:07:27.321 ************
2025-05-04 00:56:39.267503 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267508 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267512 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267517 | orchestrator |
2025-05-04 00:56:39.267522 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-04 00:56:39.267527 | orchestrator | Sunday 04 May 2025 00:50:55 +0000 (0:00:00.335) 0:07:27.657 ************
2025-05-04 00:56:39.267532 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267537 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267542 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267547 | orchestrator |
2025-05-04 00:56:39.267552 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-04 00:56:39.267557 | orchestrator | Sunday 04 May 2025 00:50:56 +0000 (0:00:00.630) 0:07:28.287 ************
2025-05-04 00:56:39.267562 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267567 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267572 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267576 | orchestrator |
2025-05-04 00:56:39.267581 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-04 00:56:39.267586 | orchestrator | Sunday 04 May 2025 00:50:56 +0000 (0:00:00.362) 0:07:28.650 ************
2025-05-04 00:56:39.267591 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267596 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267601 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267606 | orchestrator |
2025-05-04 00:56:39.267611 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-04 00:56:39.267616 | orchestrator | Sunday 04 May 2025 00:50:56 +0000 (0:00:00.360) 0:07:29.011 ************
2025-05-04 00:56:39.267621 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267626 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267631 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267636 | orchestrator |
2025-05-04 00:56:39.267641 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-04 00:56:39.267646 | orchestrator | Sunday 04 May 2025 00:50:57 +0000 (0:00:00.351) 0:07:29.362 ************
2025-05-04 00:56:39.267651 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267656 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267660 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267665 | orchestrator |
2025-05-04 00:56:39.267670 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-04 00:56:39.267675 | orchestrator | Sunday 04 May 2025 00:50:57 +0000 (0:00:00.663) 0:07:30.026 ************
2025-05-04 00:56:39.267683 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267689 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267693 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267698 | orchestrator |
2025-05-04 00:56:39.267703 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-04 00:56:39.267708 | orchestrator | Sunday 04 May 2025 00:50:58 +0000 (0:00:00.351) 0:07:30.377 ************
2025-05-04 00:56:39.267713 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267718 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267723 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267728 | orchestrator |
2025-05-04 00:56:39.267732 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-04 00:56:39.267737 | orchestrator | Sunday 04 May 2025 00:50:58 +0000 (0:00:00.327) 0:07:30.705 ************
2025-05-04 00:56:39.267743 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-04 00:56:39.267747 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-04 00:56:39.267752 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267757 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-04 00:56:39.267762 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-04 00:56:39.267780 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267785 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-04 00:56:39.267789 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-04 00:56:39.267794 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267799 | orchestrator |
2025-05-04 00:56:39.267804 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-04 00:56:39.267809 | orchestrator | Sunday 04 May 2025 00:50:58 +0000 (0:00:00.352) 0:07:31.057 ************
2025-05-04 00:56:39.267814 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-04 00:56:39.267822 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-04 00:56:39.267827 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267832 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-04 00:56:39.267837 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-04 00:56:39.267842 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267847 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-04 00:56:39.267851 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-04 00:56:39.267856 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267861 | orchestrator |
2025-05-04 00:56:39.267866 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-04 00:56:39.267883 | orchestrator | Sunday 04 May 2025 00:50:59 +0000 (0:00:00.680) 0:07:31.738 ************
2025-05-04 00:56:39.267889 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267896 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267901 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267906 | orchestrator |
2025-05-04 00:56:39.267911 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-04 00:56:39.267916 | orchestrator | Sunday 04 May 2025 00:50:59 +0000 (0:00:00.344) 0:07:32.083 ************
2025-05-04 00:56:39.267921 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267926 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267931 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267936 | orchestrator |
2025-05-04 00:56:39.267941 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-04 00:56:39.267946 | orchestrator | Sunday 04 May 2025 00:51:00 +0000 (0:00:00.341) 0:07:32.424 ************
2025-05-04 00:56:39.267951 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267956 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267961 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267969 | orchestrator |
2025-05-04 00:56:39.267974 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-04 00:56:39.267979 | orchestrator | Sunday 04 May 2025 00:51:00 +0000 (0:00:00.361) 0:07:32.785 ************
2025-05-04 00:56:39.267984 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.267989 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.267994 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.267999 | orchestrator |
2025-05-04 00:56:39.268003 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-04 00:56:39.268008 | orchestrator | Sunday 04 May 2025 00:51:01 +0000 (0:00:00.711) 0:07:33.497 ************
2025-05-04 00:56:39.268013 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268018 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268023 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268028 | orchestrator |
2025-05-04 00:56:39.268033 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-04 00:56:39.268038 | orchestrator | Sunday 04 May 2025 00:51:01 +0000 (0:00:00.334) 0:07:33.832 ************
2025-05-04 00:56:39.268043 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268048 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268053 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268058 | orchestrator |
2025-05-04 00:56:39.268062 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-04 00:56:39.268069 | orchestrator | Sunday 04 May 2025 00:51:01 +0000 (0:00:00.322) 0:07:34.154 ************
2025-05-04 00:56:39.268075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.268079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.268084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.268089 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268094 | orchestrator |
2025-05-04 00:56:39.268099 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-04 00:56:39.268104 | orchestrator | Sunday 04 May 2025 00:51:02 +0000 (0:00:00.437) 0:07:34.592 ************
2025-05-04 00:56:39.268109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.268114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.268119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.268124 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268129 | orchestrator |
2025-05-04 00:56:39.268134 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-04 00:56:39.268138 | orchestrator | Sunday 04 May 2025 00:51:02 +0000 (0:00:00.426) 0:07:35.019 ************
2025-05-04 00:56:39.268143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.268148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.268153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.268158 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268163 | orchestrator |
2025-05-04 00:56:39.268168 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.268173 | orchestrator | Sunday 04 May 2025 00:51:03 +0000 (0:00:00.722) 0:07:35.742 ************
2025-05-04 00:56:39.268178 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268182 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268187 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268192 | orchestrator |
2025-05-04 00:56:39.268197 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-04 00:56:39.268202 | orchestrator | Sunday 04 May 2025 00:51:04 +0000 (0:00:00.599) 0:07:36.341 ************
2025-05-04 00:56:39.268207 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.268212 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268217 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.268222 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268229 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.268234 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268239 | orchestrator |
2025-05-04 00:56:39.268244 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-04 00:56:39.268249 | orchestrator | Sunday 04 May 2025 00:51:04 +0000 (0:00:00.516) 0:07:36.858 ************
2025-05-04 00:56:39.268254 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268259 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268264 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268269 | orchestrator |
2025-05-04 00:56:39.268274 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-04 00:56:39.268279 | orchestrator | Sunday 04 May 2025 00:51:05 +0000 (0:00:00.374) 0:07:37.232 ************
2025-05-04 00:56:39.268284 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268289 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268294 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268298 | orchestrator |
2025-05-04 00:56:39.268314 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-04 00:56:39.268319 | orchestrator | Sunday 04 May 2025 00:51:05 +0000 (0:00:00.451) 0:07:37.684 ************
2025-05-04 00:56:39.268324 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-04 00:56:39.268329 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268334 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-04 00:56:39.268339 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268344 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-04 00:56:39.268349 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268354 | orchestrator |
2025-05-04 00:56:39.268358 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-04 00:56:39.268363 | orchestrator | Sunday 04 May 2025 00:51:06 +0000 (0:00:00.855) 0:07:38.539 ************
2025-05-04 00:56:39.268368 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.268373 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268378 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.268383 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268388 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-04 00:56:39.268393 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268398 | orchestrator |
2025-05-04 00:56:39.268403 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-04 00:56:39.268408 | orchestrator | Sunday 04 May 2025 00:51:06 +0000 (0:00:00.337) 0:07:38.877 ************
2025-05-04 00:56:39.268413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-04 00:56:39.268418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-04 00:56:39.268423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-04 00:56:39.268428 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268433 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-04 00:56:39.268438 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-04 00:56:39.268443 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-04 00:56:39.268448 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-04 00:56:39.268457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-04 00:56:39.268462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-04 00:56:39.268467 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268472 | orchestrator |
2025-05-04 00:56:39.268477 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-04 00:56:39.268486 | orchestrator | Sunday 04 May 2025 00:51:07 +0000 (0:00:00.724) 0:07:39.602 ************
2025-05-04 00:56:39.268491 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268496 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268501 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268506 | orchestrator |
2025-05-04 00:56:39.268511 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-04 00:56:39.268516 | orchestrator | Sunday 04 May 2025 00:51:08 +0000 (0:00:00.855) 0:07:40.457 ************
2025-05-04 00:56:39.268521 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-04 00:56:39.268526 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268530 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-04 00:56:39.268535 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268540 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-04 00:56:39.268545 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268550 | orchestrator |
2025-05-04 00:56:39.268555 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-04 00:56:39.268560 | orchestrator | Sunday 04 May 2025 00:51:08 +0000 (0:00:00.591) 0:07:41.048 ************
2025-05-04 00:56:39.268564 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268569 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268574 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268579 | orchestrator |
2025-05-04 00:56:39.268584 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-04 00:56:39.268589 | orchestrator | Sunday 04 May 2025 00:51:09 +0000 (0:00:00.866) 0:07:41.914 ************
2025-05-04 00:56:39.268594 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268599 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268606 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268612 | orchestrator |
2025-05-04 00:56:39.268617 | orchestrator | TASK [ceph-osd : set_fact add_osd] *********************************************
2025-05-04 00:56:39.268622 | orchestrator | Sunday 04 May 2025 00:51:10 +0000 (0:00:00.550) 0:07:42.465 ************
2025-05-04 00:56:39.268627 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.268632 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.268637 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.268642 | orchestrator |
2025-05-04 00:56:39.268647 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] **********************************
2025-05-04 00:56:39.268652 | orchestrator | Sunday 04 May 2025 00:51:10 +0000 (0:00:00.677) 0:07:43.142 ************
2025-05-04 00:56:39.268659 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-04 00:56:39.268664 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:56:39.268669 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:56:39.268674 | orchestrator |
2025-05-04 00:56:39.268679 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ******************************
2025-05-04 00:56:39.268684 | orchestrator | Sunday 04 May 2025 00:51:11 +0000 (0:00:00.746) 0:07:43.888 ************
2025-05-04 00:56:39.268700 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.268706 | orchestrator |
2025-05-04 00:56:39.268711 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
2025-05-04 00:56:39.268716 | orchestrator | Sunday 04 May 2025 00:51:12 +0000 (0:00:00.571) 0:07:44.460 ************
2025-05-04 00:56:39.268721 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268726 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268731 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268743 | orchestrator |
2025-05-04 00:56:39.268748 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
2025-05-04 00:56:39.268753 | orchestrator | Sunday 04 May 2025 00:51:12 +0000 (0:00:00.582) 0:07:45.042 ************
2025-05-04 00:56:39.268777 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268783 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268788 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268793 | orchestrator |
2025-05-04 00:56:39.268798 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] **********************************
2025-05-04 00:56:39.268803 | orchestrator | Sunday 04 May 2025 00:51:13 +0000 (0:00:00.317) 0:07:45.360 ************
2025-05-04 00:56:39.268808 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268812 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268817 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268822 | orchestrator |
2025-05-04 00:56:39.268827 | orchestrator | TASK [ceph-osd : disable transparent hugepage] *********************************
2025-05-04 00:56:39.268832 | orchestrator | Sunday 04 May 2025 00:51:13 +0000 (0:00:00.357) 0:07:45.740 ************
2025-05-04 00:56:39.268837 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.268842 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.268847 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.268852 | orchestrator |
2025-05-04 00:56:39.268857 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
2025-05-04 00:56:39.268862 | orchestrator | Sunday 04 May 2025 00:51:13 +0000 (0:00:00.357) 0:07:46.098 ************
2025-05-04 00:56:39.268867 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.268872 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.268877 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.268882 | orchestrator |
2025-05-04 00:56:39.268887 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
2025-05-04 00:56:39.268892 | orchestrator | Sunday 04 May 2025 00:51:14 +0000 (0:00:01.012) 0:07:47.111 ************
2025-05-04 00:56:39.268896 | orchestrator | ok: [testbed-node-3]
2025-05-04 00:56:39.268902 | orchestrator | ok: [testbed-node-4]
2025-05-04 00:56:39.268907 | orchestrator | ok: [testbed-node-5]
2025-05-04 00:56:39.268912 | orchestrator |
2025-05-04 00:56:39.268917 | orchestrator | TASK [ceph-osd : apply operating system tuning] ********************************
2025-05-04 00:56:39.268922 | orchestrator | Sunday 04 May 2025 00:51:15 +0000 (0:00:00.411) 0:07:47.523 ************
2025-05-04 00:56:39.268926 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-04 00:56:39.268934 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-04 00:56:39.268939 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-04 00:56:39.268944 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-04 00:56:39.268949 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-04 00:56:39.268954 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-04 00:56:39.268959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-04 00:56:39.268964 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-04 00:56:39.268969 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-04 00:56:39.268974 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-04 00:56:39.268979 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-04 00:56:39.268983 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-04 00:56:39.268988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-04 00:56:39.268995 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-04 00:56:39.269000 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-04 00:56:39.269009 | orchestrator |
2025-05-04 00:56:39.269014 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-05-04 00:56:39.269019 | orchestrator | Sunday 04 May 2025 00:51:18 +0000 (0:00:03.071) 0:07:50.594 ************
2025-05-04 00:56:39.269024 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.269029 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.269033 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.269038 | orchestrator |
2025-05-04 00:56:39.269043 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-05-04 00:56:39.269048 | orchestrator | Sunday 04 May 2025 00:51:18 +0000 (0:00:00.555) 0:07:51.150 ************
2025-05-04 00:56:39.269053 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.269058 | orchestrator |
2025-05-04 00:56:39.269065 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-05-04 00:56:39.269071 | orchestrator | Sunday 04 May 2025 00:51:19 +0000 (0:00:00.593) 0:07:51.744 ************
2025-05-04 00:56:39.269075 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-04 00:56:39.269093 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-04 00:56:39.269099 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-04 00:56:39.269104 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-04 00:56:39.269109 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-04 00:56:39.269114 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-04 00:56:39.269122 | orchestrator |
2025-05-04 00:56:39.269127 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-05-04 00:56:39.269132 | orchestrator | Sunday 04 May 2025 00:51:20 +0000 (0:00:01.079) 0:07:52.823 ************
2025-05-04 00:56:39.269137 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-04 00:56:39.269142 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-04 00:56:39.269147 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-04 00:56:39.269152 | orchestrator |
2025-05-04 00:56:39.269157 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-05-04 00:56:39.269162 | orchestrator | Sunday 04 May 2025 00:51:22 +0000 (0:00:02.097) 0:07:54.920 ************
2025-05-04 00:56:39.269167 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-04 00:56:39.269172 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-04 00:56:39.269177 | orchestrator | changed: [testbed-node-3]
2025-05-04 00:56:39.269184 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-04 00:56:39.269189 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-04 00:56:39.269194 | orchestrator | changed: [testbed-node-4]
2025-05-04 00:56:39.269199 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-04 00:56:39.269204 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-04 00:56:39.269209 | orchestrator | changed: [testbed-node-5]
2025-05-04 00:56:39.269214 | orchestrator |
2025-05-04 00:56:39.269219 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-05-04 00:56:39.269224 | orchestrator | Sunday 04 May 2025 00:51:24 +0000 (0:00:01.453) 0:07:56.374 ************
2025-05-04 00:56:39.269229 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-04 00:56:39.269234 | orchestrator |
2025-05-04 00:56:39.269238 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-05-04 00:56:39.269244 | orchestrator | Sunday 04 May 2025 00:51:26 +0000 (0:00:01.918) 0:07:58.293 ************
2025-05-04 00:56:39.269249 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 00:56:39.269253 | orchestrator |
2025-05-04 00:56:39.269258 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-05-04 00:56:39.269267 | orchestrator | Sunday 04 May 2025 00:51:26 +0000 (0:00:00.816) 0:07:59.109 ************
2025-05-04 00:56:39.269272 | orchestrator | skipping: [testbed-node-3]
2025-05-04 00:56:39.269277 | orchestrator | skipping: [testbed-node-4]
2025-05-04 00:56:39.269282 | orchestrator | skipping: [testbed-node-5]
2025-05-04 00:56:39.269287 | orchestrator |
2025-05-04 00:56:39.269292 | orchestrator | TASK
[ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-04 00:56:39.269297 | orchestrator | Sunday 04 May 2025 00:51:27 +0000 (0:00:00.319) 0:07:59.429 ************ 2025-05-04 00:56:39.269301 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269306 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269312 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269317 | orchestrator | 2025-05-04 00:56:39.269321 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-04 00:56:39.269326 | orchestrator | Sunday 04 May 2025 00:51:27 +0000 (0:00:00.323) 0:07:59.753 ************ 2025-05-04 00:56:39.269331 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269336 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269341 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269346 | orchestrator | 2025-05-04 00:56:39.269351 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-04 00:56:39.269356 | orchestrator | Sunday 04 May 2025 00:51:27 +0000 (0:00:00.301) 0:08:00.054 ************ 2025-05-04 00:56:39.269361 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.269366 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.269371 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.269376 | orchestrator | 2025-05-04 00:56:39.269381 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-04 00:56:39.269386 | orchestrator | Sunday 04 May 2025 00:51:28 +0000 (0:00:00.775) 0:08:00.830 ************ 2025-05-04 00:56:39.269391 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.269396 | orchestrator | 2025-05-04 00:56:39.269401 | orchestrator | TASK 
[ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-04 00:56:39.269405 | orchestrator | Sunday 04 May 2025 00:51:29 +0000 (0:00:00.637) 0:08:01.467 ************ 2025-05-04 00:56:39.269410 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c91b3cb6-7edb-5452-ada6-d38ce882942b', 'data_vg': 'ceph-c91b3cb6-7edb-5452-ada6-d38ce882942b'}) 2025-05-04 00:56:39.269416 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-03a186d7-e7a2-5e82-b5c3-d5631de29e6f', 'data_vg': 'ceph-03a186d7-e7a2-5e82-b5c3-d5631de29e6f'}) 2025-05-04 00:56:39.269422 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98453abf-c748-514f-aec7-544322a7c940', 'data_vg': 'ceph-98453abf-c748-514f-aec7-544322a7c940'}) 2025-05-04 00:56:39.269427 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdbd5a24-b46a-5ddb-91ef-7688b352f27d', 'data_vg': 'ceph-bdbd5a24-b46a-5ddb-91ef-7688b352f27d'}) 2025-05-04 00:56:39.269442 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5e087d3a-1c7d-5e62-b576-6c121f884fde', 'data_vg': 'ceph-5e087d3a-1c7d-5e62-b576-6c121f884fde'}) 2025-05-04 00:56:39.269447 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f54bf35c-9381-504c-8591-afe4d3e61469', 'data_vg': 'ceph-f54bf35c-9381-504c-8591-afe4d3e61469'}) 2025-05-04 00:56:39.269452 | orchestrator | 2025-05-04 00:56:39.269458 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-04 00:56:39.269463 | orchestrator | Sunday 04 May 2025 00:52:10 +0000 (0:00:41.251) 0:08:42.719 ************ 2025-05-04 00:56:39.269467 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269472 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269477 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269482 | orchestrator | 2025-05-04 00:56:39.269487 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-05-04 00:56:39.269495 | orchestrator | Sunday 04 May 2025 00:52:11 +0000 (0:00:00.501) 0:08:43.221 ************ 2025-05-04 00:56:39.269500 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.269505 | orchestrator | 2025-05-04 00:56:39.269510 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-04 00:56:39.269515 | orchestrator | Sunday 04 May 2025 00:52:11 +0000 (0:00:00.554) 0:08:43.775 ************ 2025-05-04 00:56:39.269520 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.269525 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.269530 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.269535 | orchestrator | 2025-05-04 00:56:39.269540 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-04 00:56:39.269545 | orchestrator | Sunday 04 May 2025 00:52:12 +0000 (0:00:00.667) 0:08:44.443 ************ 2025-05-04 00:56:39.269550 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.269555 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.269560 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.269565 | orchestrator | 2025-05-04 00:56:39.269570 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-04 00:56:39.269575 | orchestrator | Sunday 04 May 2025 00:52:14 +0000 (0:00:02.093) 0:08:46.536 ************ 2025-05-04 00:56:39.269579 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.269584 | orchestrator | 2025-05-04 00:56:39.269589 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-04 00:56:39.269594 | orchestrator | Sunday 04 May 2025 00:52:14 +0000 (0:00:00.598) 0:08:47.135 ************ 
2025-05-04 00:56:39.269599 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.269607 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.269612 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.269616 | orchestrator | 2025-05-04 00:56:39.269621 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-04 00:56:39.269628 | orchestrator | Sunday 04 May 2025 00:52:16 +0000 (0:00:01.584) 0:08:48.720 ************ 2025-05-04 00:56:39.269634 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.269639 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.269644 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.269648 | orchestrator | 2025-05-04 00:56:39.269653 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-04 00:56:39.269658 | orchestrator | Sunday 04 May 2025 00:52:17 +0000 (0:00:01.211) 0:08:49.931 ************ 2025-05-04 00:56:39.269663 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.269668 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.269673 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.269678 | orchestrator | 2025-05-04 00:56:39.269683 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-04 00:56:39.269688 | orchestrator | Sunday 04 May 2025 00:52:19 +0000 (0:00:01.644) 0:08:51.576 ************ 2025-05-04 00:56:39.269693 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269698 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269703 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269708 | orchestrator | 2025-05-04 00:56:39.269713 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-04 00:56:39.269718 | orchestrator | Sunday 04 May 2025 00:52:19 +0000 (0:00:00.360) 0:08:51.937 ************ 2025-05-04 
00:56:39.269723 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269728 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269733 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269737 | orchestrator | 2025-05-04 00:56:39.269742 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-04 00:56:39.269747 | orchestrator | Sunday 04 May 2025 00:52:20 +0000 (0:00:00.649) 0:08:52.586 ************ 2025-05-04 00:56:39.269752 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-04 00:56:39.269760 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-04 00:56:39.269790 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-04 00:56:39.269796 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-05-04 00:56:39.269801 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-05-04 00:56:39.269806 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-05-04 00:56:39.269811 | orchestrator | 2025-05-04 00:56:39.269816 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-04 00:56:39.269821 | orchestrator | Sunday 04 May 2025 00:52:21 +0000 (0:00:01.061) 0:08:53.647 ************ 2025-05-04 00:56:39.269826 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-04 00:56:39.269831 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-04 00:56:39.269836 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-04 00:56:39.269841 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-04 00:56:39.269846 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-05-04 00:56:39.269851 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-04 00:56:39.269856 | orchestrator | 2025-05-04 00:56:39.269861 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-04 00:56:39.269879 | orchestrator | Sunday 04 May 2025 00:52:25 +0000 (0:00:03.736) 
0:08:57.384 ************ 2025-05-04 00:56:39.269884 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269889 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269894 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.269899 | orchestrator | 2025-05-04 00:56:39.269904 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-04 00:56:39.269909 | orchestrator | Sunday 04 May 2025 00:52:28 +0000 (0:00:02.912) 0:09:00.296 ************ 2025-05-04 00:56:39.269914 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269919 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269924 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-04 00:56:39.269929 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.269934 | orchestrator | 2025-05-04 00:56:39.269939 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-04 00:56:39.269944 | orchestrator | Sunday 04 May 2025 00:52:40 +0000 (0:00:12.360) 0:09:12.656 ************ 2025-05-04 00:56:39.269949 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269954 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269959 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269964 | orchestrator | 2025-05-04 00:56:39.269969 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-04 00:56:39.269974 | orchestrator | Sunday 04 May 2025 00:52:40 +0000 (0:00:00.503) 0:09:13.159 ************ 2025-05-04 00:56:39.269979 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.269984 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.269989 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.269993 | orchestrator | 2025-05-04 
00:56:39.269998 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.270003 | orchestrator | Sunday 04 May 2025 00:52:42 +0000 (0:00:01.188) 0:09:14.348 ************ 2025-05-04 00:56:39.270008 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.270025 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.270031 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.270036 | orchestrator | 2025-05-04 00:56:39.270041 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-04 00:56:39.270046 | orchestrator | Sunday 04 May 2025 00:52:43 +0000 (0:00:00.983) 0:09:15.331 ************ 2025-05-04 00:56:39.270051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.270056 | orchestrator | 2025-05-04 00:56:39.270061 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-04 00:56:39.270071 | orchestrator | Sunday 04 May 2025 00:52:43 +0000 (0:00:00.598) 0:09:15.930 ************ 2025-05-04 00:56:39.270077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.270082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.270087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.270091 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270096 | orchestrator | 2025-05-04 00:56:39.270101 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-04 00:56:39.270106 | orchestrator | Sunday 04 May 2025 00:52:44 +0000 (0:00:00.442) 0:09:16.373 ************ 2025-05-04 00:56:39.270111 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270116 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270121 | 
orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270126 | orchestrator | 2025-05-04 00:56:39.270131 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-04 00:56:39.270136 | orchestrator | Sunday 04 May 2025 00:52:44 +0000 (0:00:00.337) 0:09:16.710 ************ 2025-05-04 00:56:39.270141 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270146 | orchestrator | 2025-05-04 00:56:39.270151 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-04 00:56:39.270156 | orchestrator | Sunday 04 May 2025 00:52:45 +0000 (0:00:01.044) 0:09:17.754 ************ 2025-05-04 00:56:39.270161 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270165 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270170 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270175 | orchestrator | 2025-05-04 00:56:39.270180 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-04 00:56:39.270209 | orchestrator | Sunday 04 May 2025 00:52:45 +0000 (0:00:00.367) 0:09:18.121 ************ 2025-05-04 00:56:39.270214 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270219 | orchestrator | 2025-05-04 00:56:39.270224 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-04 00:56:39.270229 | orchestrator | Sunday 04 May 2025 00:52:46 +0000 (0:00:00.271) 0:09:18.392 ************ 2025-05-04 00:56:39.270234 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270239 | orchestrator | 2025-05-04 00:56:39.270244 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-04 00:56:39.270249 | orchestrator | Sunday 04 May 2025 00:52:46 +0000 (0:00:00.256) 0:09:18.649 ************ 2025-05-04 00:56:39.270254 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270259 | 
orchestrator | 2025-05-04 00:56:39.270264 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-04 00:56:39.270269 | orchestrator | Sunday 04 May 2025 00:52:46 +0000 (0:00:00.126) 0:09:18.775 ************ 2025-05-04 00:56:39.270274 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270279 | orchestrator | 2025-05-04 00:56:39.270284 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-04 00:56:39.270289 | orchestrator | Sunday 04 May 2025 00:52:46 +0000 (0:00:00.232) 0:09:19.007 ************ 2025-05-04 00:56:39.270294 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270299 | orchestrator | 2025-05-04 00:56:39.270304 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-04 00:56:39.270309 | orchestrator | Sunday 04 May 2025 00:52:47 +0000 (0:00:00.252) 0:09:19.260 ************ 2025-05-04 00:56:39.270314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.270331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.270337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.270342 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270347 | orchestrator | 2025-05-04 00:56:39.270352 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-04 00:56:39.270357 | orchestrator | Sunday 04 May 2025 00:52:47 +0000 (0:00:00.780) 0:09:20.040 ************ 2025-05-04 00:56:39.270365 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270370 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270376 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270381 | orchestrator | 2025-05-04 00:56:39.270385 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] 
*************** 2025-05-04 00:56:39.270390 | orchestrator | Sunday 04 May 2025 00:52:48 +0000 (0:00:00.684) 0:09:20.725 ************ 2025-05-04 00:56:39.270395 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270403 | orchestrator | 2025-05-04 00:56:39.270408 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-04 00:56:39.270413 | orchestrator | Sunday 04 May 2025 00:52:48 +0000 (0:00:00.281) 0:09:21.007 ************ 2025-05-04 00:56:39.270418 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270423 | orchestrator | 2025-05-04 00:56:39.270428 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.270433 | orchestrator | Sunday 04 May 2025 00:52:49 +0000 (0:00:00.264) 0:09:21.272 ************ 2025-05-04 00:56:39.270438 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.270443 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.270448 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.270453 | orchestrator | 2025-05-04 00:56:39.270457 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-04 00:56:39.270462 | orchestrator | 2025-05-04 00:56:39.270467 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.270472 | orchestrator | Sunday 04 May 2025 00:52:52 +0000 (0:00:03.171) 0:09:24.444 ************ 2025-05-04 00:56:39.270477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.270483 | orchestrator | 2025-05-04 00:56:39.270488 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.270493 | orchestrator | Sunday 04 May 2025 00:52:53 +0000 (0:00:01.369) 
0:09:25.813 ************ 2025-05-04 00:56:39.270497 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270502 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.270507 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270512 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.270517 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.270522 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270527 | orchestrator | 2025-05-04 00:56:39.270532 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.270537 | orchestrator | Sunday 04 May 2025 00:52:54 +0000 (0:00:00.984) 0:09:26.798 ************ 2025-05-04 00:56:39.270542 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270547 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270552 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270557 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.270562 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.270566 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.270571 | orchestrator | 2025-05-04 00:56:39.270576 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.270581 | orchestrator | Sunday 04 May 2025 00:52:55 +0000 (0:00:01.105) 0:09:27.904 ************ 2025-05-04 00:56:39.270586 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270591 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270596 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270601 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.270606 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.270611 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.270616 | orchestrator | 2025-05-04 00:56:39.270620 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-04 00:56:39.270625 | orchestrator | Sunday 04 May 2025 00:52:57 +0000 (0:00:01.355) 0:09:29.259 ************ 2025-05-04 00:56:39.270630 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270638 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270643 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270648 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.270653 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.270658 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.270663 | orchestrator | 2025-05-04 00:56:39.270668 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.270675 | orchestrator | Sunday 04 May 2025 00:52:58 +0000 (0:00:01.076) 0:09:30.336 ************ 2025-05-04 00:56:39.270680 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270685 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.270690 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.270695 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.270700 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270704 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270709 | orchestrator | 2025-05-04 00:56:39.270714 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-04 00:56:39.270719 | orchestrator | Sunday 04 May 2025 00:52:59 +0000 (0:00:00.936) 0:09:31.272 ************ 2025-05-04 00:56:39.270724 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270729 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270734 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270739 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270746 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270754 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270759 | orchestrator | 2025-05-04 
00:56:39.270776 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-04 00:56:39.270786 | orchestrator | Sunday 04 May 2025 00:52:59 +0000 (0:00:00.640) 0:09:31.912 ************ 2025-05-04 00:56:39.270791 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270796 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270801 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270806 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270824 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270830 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270835 | orchestrator | 2025-05-04 00:56:39.270840 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-04 00:56:39.270845 | orchestrator | Sunday 04 May 2025 00:53:00 +0000 (0:00:01.060) 0:09:32.973 ************ 2025-05-04 00:56:39.270850 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270855 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270860 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270865 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270870 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270878 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270883 | orchestrator | 2025-05-04 00:56:39.270888 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-04 00:56:39.270893 | orchestrator | Sunday 04 May 2025 00:53:01 +0000 (0:00:00.771) 0:09:33.745 ************ 2025-05-04 00:56:39.270898 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270903 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270908 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270913 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270918 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:56:39.270923 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270928 | orchestrator | 2025-05-04 00:56:39.270933 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-04 00:56:39.270937 | orchestrator | Sunday 04 May 2025 00:53:02 +0000 (0:00:01.008) 0:09:34.753 ************ 2025-05-04 00:56:39.270942 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.270947 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.270952 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:56:39.270957 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.270965 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.270971 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.270976 | orchestrator | 2025-05-04 00:56:39.270981 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-04 00:56:39.270986 | orchestrator | Sunday 04 May 2025 00:53:03 +0000 (0:00:00.655) 0:09:35.408 ************ 2025-05-04 00:56:39.270991 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.270995 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.271000 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.271005 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.271010 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.271015 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.271020 | orchestrator | 2025-05-04 00:56:39.271025 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-04 00:56:39.271030 | orchestrator | Sunday 04 May 2025 00:53:04 +0000 (0:00:01.397) 0:09:36.806 ************ 2025-05-04 00:56:39.271035 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:56:39.271040 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:56:39.271045 | orchestrator | skipping: [testbed-node-2] 2025-05-04 
00:56:39.271050 | orchestrator | skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mon_status] ******************************
Sunday 04 May 2025 00:53:05 +0000 (0:00:00.646) 0:09:37.452 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_osd_status] ******************************
Sunday 04 May 2025 00:53:06 +0000 (0:00:01.017) 0:09:38.470 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mds_status] ******************************
Sunday 04 May 2025 00:53:06 +0000 (0:00:00.631) 0:09:39.101 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rgw_status] ******************************
Sunday 04 May 2025 00:53:07 +0000 (0:00:00.926) 0:09:40.028 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_nfs_status] ******************************
Sunday 04 May 2025 00:53:08 +0000 (0:00:00.700) 0:09:40.728 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rbd_status] ******************************
Sunday 04 May 2025 00:53:09 +0000 (0:00:00.925) 0:09:41.654 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Sunday 04 May 2025 00:53:10 +0000 (0:00:00.641) 0:09:42.295 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Sunday 04 May 2025 00:53:11 +0000 (0:00:00.958) 0:09:43.254 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Sunday 04 May 2025 00:53:11 +0000 (0:00:00.704) 0:09:43.959 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Sunday 04 May 2025 00:53:12 +0000 (0:00:01.097) 0:09:45.056 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : reset num_osds] ********************************************
Sunday 04 May 2025 00:53:13 +0000 (0:00:00.673) 0:09:45.730 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Sunday 04 May 2025 00:53:14 +0000 (0:00:00.976) 0:09:46.706 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Sunday 04 May 2025 00:53:15 +0000 (0:00:00.807) 0:09:47.514 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact rejected_devices] *********************************
Sunday 04 May 2025 00:53:16 +0000 (0:00:00.912) 0:09:48.426 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _devices] *****************************************
Sunday 04 May 2025 00:53:16 +0000 (0:00:00.640) 0:09:49.067 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Sunday 04 May 2025 00:53:17 +0000 (0:00:00.937) 0:09:50.005 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Sunday 04 May 2025 00:53:18 +0000 (0:00:00.691) 0:09:50.697 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Sunday 04 May 2025 00:53:19 +0000 (0:00:00.931) 0:09:51.628 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Sunday 04 May 2025 00:53:20 +0000 (0:00:00.654) 0:09:52.283 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Sunday 04 May 2025 00:53:21 +0000 (0:00:00.933) 0:09:53.216 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Sunday 04 May 2025 00:53:21 +0000 (0:00:00.655) 0:09:53.872 ************
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Sunday 04 May 2025 00:53:22 +0000 (0:00:01.075) 0:09:54.947 ************
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=osd memory target)
skipping: [testbed-node-3] => (item=osd_memory_target)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=osd memory target)
skipping: [testbed-node-4] => (item=osd_memory_target)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=osd memory target)
skipping: [testbed-node-5] => (item=osd_memory_target)
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Sunday 04 May 2025 00:53:23 +0000 (0:00:00.875) 0:09:55.823 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : create ceph conf directory] ********************************
Sunday 04 May 2025 00:53:24 +0000 (0:00:00.956) 0:09:56.779 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Sunday 04 May 2025 00:53:25 +0000 (0:00:00.747) 0:09:57.526 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Sunday 04 May 2025 00:53:26 +0000 (0:00:01.022) 0:09:58.549 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Sunday 04 May 2025 00:53:27 +0000 (0:00:00.668) 0:09:59.217 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Sunday 04 May 2025 00:53:27 +0000 (0:00:00.948) 0:10:00.166 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _interface] ****************************************
Sunday 04 May 2025 00:53:28 +0000 (0:00:00.615) 0:10:00.782 ************
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Sunday 04 May 2025 00:53:29 +0000 (0:00:00.724) 0:10:01.506 ************
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Sunday 04 May 2025 00:53:30 +0000 (0:00:00.957) 0:10:02.464 ************
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Sunday 04 May 2025 00:53:30 +0000 (0:00:00.434) 0:10:02.899 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Sunday 04 May 2025 00:53:31 +0000 (0:00:00.717) 0:10:03.616 ************
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Sunday 04 May 2025 00:53:32 +0000 (0:00:01.121) 0:10:04.738 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Sunday 04 May 2025 00:53:33 +0000 (0:00:00.727) 0:10:05.466 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Sunday 04 May 2025 00:53:34 +0000 (0:00:01.062) 0:10:06.529 ************
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Sunday 04 May 2025 00:53:35 +0000 (0:00:00.921) 0:10:07.451 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Sunday 04 May 2025 00:53:36 +0000 (0:00:01.067) 0:10:08.519 ************
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Sunday 04 May 2025 00:53:37 +0000 (0:00:01.672) 0:10:10.191 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Sunday 04 May 2025 00:53:39 +0000 (0:00:01.529) 0:10:11.720 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks multisite] **************************************
Sunday 04 May 2025 00:53:40 +0000 (0:00:01.362) 0:10:13.083 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Sunday 04 May 2025 00:53:42 +0000 (0:00:01.210) 0:10:14.294 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-crash : create client.crash keyring] ********************************
Sunday 04 May 2025 00:53:43 +0000 (0:00:01.270) 0:10:15.564 ************
changed: [testbed-node-0]

TASK [ceph-crash : get keys from monitors] *************************************
Sunday 04 May 2025 00:53:46 +0000 (0:00:03.171) 0:10:18.736 ************
ok: [testbed-node-0]

TASK [ceph-crash : copy ceph key(s) if needed] *********************************
Sunday 04 May 2025 00:53:48 +0000 (0:00:01.677) 0:10:20.414 ************
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
Sunday 04 May 2025 00:53:50 +0000 (0:00:01.949) 0:10:22.364 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : include_tasks systemd.yml] **********************************
Sunday 04 May 2025 00:53:51 +0000 (0:00:01.155) 0:10:23.519 ************
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
Sunday 04 May 2025 00:53:52 +0000 (0:00:01.602) 0:10:25.121 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : start the ceph-crash service] *******************************
Sunday 04 May 2025 00:53:55 +0000 (0:00:02.338) 0:10:27.460 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-04 00:56:39.273580 | orchestrator
| Sunday 04 May 2025 00:53:59 +0000 (0:00:04.037) 0:10:31.497 ************ 2025-05-04 00:56:39.273586 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.273591 | orchestrator | 2025-05-04 00:56:39.273596 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-04 00:56:39.273601 | orchestrator | Sunday 04 May 2025 00:54:00 +0000 (0:00:01.190) 0:10:32.688 ************ 2025-05-04 00:56:39.273606 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:56:39.273611 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.273616 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.273621 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.273625 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.273630 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.273635 | orchestrator | 2025-05-04 00:56:39.273640 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-04 00:56:39.273645 | orchestrator | Sunday 04 May 2025 00:54:01 +0000 (0:00:00.812) 0:10:33.501 ************ 2025-05-04 00:56:39.273650 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:56:39.273655 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.273661 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:56:39.273668 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.273675 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.273680 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:56:39.273685 | orchestrator | 2025-05-04 00:56:39.273690 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-04 00:56:39.273695 | orchestrator | Sunday 04 May 2025 00:54:03 +0000 (0:00:02.473) 0:10:35.974 ************ 2025-05-04 00:56:39.273700 | orchestrator | ok: 
[testbed-node-0] 2025-05-04 00:56:39.273705 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:56:39.273710 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:56:39.273715 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.273720 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.273724 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.273732 | orchestrator | 2025-05-04 00:56:39.273737 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-04 00:56:39.273742 | orchestrator | 2025-05-04 00:56:39.273747 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.273752 | orchestrator | Sunday 04 May 2025 00:54:07 +0000 (0:00:03.481) 0:10:39.456 ************ 2025-05-04 00:56:39.273757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.273792 | orchestrator | 2025-05-04 00:56:39.273798 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.273803 | orchestrator | Sunday 04 May 2025 00:54:08 +0000 (0:00:01.157) 0:10:40.613 ************ 2025-05-04 00:56:39.273808 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.273817 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.273822 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.273827 | orchestrator | 2025-05-04 00:56:39.273832 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.273837 | orchestrator | Sunday 04 May 2025 00:54:08 +0000 (0:00:00.529) 0:10:41.143 ************ 2025-05-04 00:56:39.273842 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.273847 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.273852 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.273857 | orchestrator | 2025-05-04 
00:56:39.273862 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.273867 | orchestrator | Sunday 04 May 2025 00:54:09 +0000 (0:00:00.791) 0:10:41.934 ************ 2025-05-04 00:56:39.273872 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.273877 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.273882 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.273887 | orchestrator | 2025-05-04 00:56:39.273892 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-04 00:56:39.273897 | orchestrator | Sunday 04 May 2025 00:54:10 +0000 (0:00:01.159) 0:10:43.094 ************ 2025-05-04 00:56:39.273902 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.273907 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.273911 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.273916 | orchestrator | 2025-05-04 00:56:39.273924 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.273929 | orchestrator | Sunday 04 May 2025 00:54:11 +0000 (0:00:00.737) 0:10:43.832 ************ 2025-05-04 00:56:39.273934 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.273939 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.273944 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.273949 | orchestrator | 2025-05-04 00:56:39.273954 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-04 00:56:39.273958 | orchestrator | Sunday 04 May 2025 00:54:12 +0000 (0:00:00.447) 0:10:44.280 ************ 2025-05-04 00:56:39.273964 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.273968 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.273973 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.273978 | orchestrator | 2025-05-04 00:56:39.273983 | orchestrator | TASK 
[ceph-handler : check for a nfs container] ******************************** 2025-05-04 00:56:39.273988 | orchestrator | Sunday 04 May 2025 00:54:12 +0000 (0:00:00.333) 0:10:44.613 ************ 2025-05-04 00:56:39.273993 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.273998 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274003 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274008 | orchestrator | 2025-05-04 00:56:39.274038 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-04 00:56:39.274044 | orchestrator | Sunday 04 May 2025 00:54:13 +0000 (0:00:00.656) 0:10:45.270 ************ 2025-05-04 00:56:39.274049 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274054 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274059 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274064 | orchestrator | 2025-05-04 00:56:39.274069 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-04 00:56:39.274074 | orchestrator | Sunday 04 May 2025 00:54:13 +0000 (0:00:00.340) 0:10:45.610 ************ 2025-05-04 00:56:39.274079 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274087 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274093 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274097 | orchestrator | 2025-05-04 00:56:39.274102 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-04 00:56:39.274108 | orchestrator | Sunday 04 May 2025 00:54:13 +0000 (0:00:00.331) 0:10:45.941 ************ 2025-05-04 00:56:39.274112 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274117 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274126 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274131 | orchestrator | 2025-05-04 00:56:39.274136 | orchestrator | TASK 
[ceph-handler : check for a ceph-crash container] ************************* 2025-05-04 00:56:39.274141 | orchestrator | Sunday 04 May 2025 00:54:14 +0000 (0:00:00.368) 0:10:46.310 ************ 2025-05-04 00:56:39.274146 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.274151 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.274156 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.274161 | orchestrator | 2025-05-04 00:56:39.274165 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-04 00:56:39.274170 | orchestrator | Sunday 04 May 2025 00:54:15 +0000 (0:00:01.285) 0:10:47.596 ************ 2025-05-04 00:56:39.274175 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274180 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274185 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274190 | orchestrator | 2025-05-04 00:56:39.274195 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-04 00:56:39.274200 | orchestrator | Sunday 04 May 2025 00:54:15 +0000 (0:00:00.330) 0:10:47.926 ************ 2025-05-04 00:56:39.274205 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274210 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274214 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274219 | orchestrator | 2025-05-04 00:56:39.274224 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-04 00:56:39.274229 | orchestrator | Sunday 04 May 2025 00:54:16 +0000 (0:00:00.403) 0:10:48.329 ************ 2025-05-04 00:56:39.274234 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.274239 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.274244 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.274249 | orchestrator | 2025-05-04 00:56:39.274254 | orchestrator | TASK [ceph-handler : set_fact 
handler_mds_status] ****************************** 2025-05-04 00:56:39.274259 | orchestrator | Sunday 04 May 2025 00:54:16 +0000 (0:00:00.364) 0:10:48.694 ************ 2025-05-04 00:56:39.274264 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.274268 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.274273 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.274278 | orchestrator | 2025-05-04 00:56:39.274283 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-04 00:56:39.274288 | orchestrator | Sunday 04 May 2025 00:54:17 +0000 (0:00:00.772) 0:10:49.466 ************ 2025-05-04 00:56:39.274293 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.274298 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.274303 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.274308 | orchestrator | 2025-05-04 00:56:39.274313 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-04 00:56:39.274318 | orchestrator | Sunday 04 May 2025 00:54:17 +0000 (0:00:00.371) 0:10:49.837 ************ 2025-05-04 00:56:39.274322 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274327 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274332 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274353 | orchestrator | 2025-05-04 00:56:39.274358 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-04 00:56:39.274363 | orchestrator | Sunday 04 May 2025 00:54:17 +0000 (0:00:00.329) 0:10:50.166 ************ 2025-05-04 00:56:39.274368 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274373 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274378 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274383 | orchestrator | 2025-05-04 00:56:39.274388 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] 
****************************** 2025-05-04 00:56:39.274393 | orchestrator | Sunday 04 May 2025 00:54:18 +0000 (0:00:00.324) 0:10:50.491 ************ 2025-05-04 00:56:39.274398 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274403 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274407 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274412 | orchestrator | 2025-05-04 00:56:39.274420 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-04 00:56:39.274425 | orchestrator | Sunday 04 May 2025 00:54:18 +0000 (0:00:00.623) 0:10:51.114 ************ 2025-05-04 00:56:39.274430 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.274435 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.274440 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.274445 | orchestrator | 2025-05-04 00:56:39.274452 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-04 00:56:39.274457 | orchestrator | Sunday 04 May 2025 00:54:19 +0000 (0:00:00.389) 0:10:51.504 ************ 2025-05-04 00:56:39.274462 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274467 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274472 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274477 | orchestrator | 2025-05-04 00:56:39.274482 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-04 00:56:39.274487 | orchestrator | Sunday 04 May 2025 00:54:19 +0000 (0:00:00.332) 0:10:51.837 ************ 2025-05-04 00:56:39.274492 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274497 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274501 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274506 | orchestrator | 2025-05-04 00:56:39.274511 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-05-04 00:56:39.274516 | orchestrator | Sunday 04 May 2025 00:54:19 +0000 (0:00:00.362) 0:10:52.199 ************ 2025-05-04 00:56:39.274521 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274526 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274531 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274535 | orchestrator | 2025-05-04 00:56:39.274540 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-04 00:56:39.274545 | orchestrator | Sunday 04 May 2025 00:54:20 +0000 (0:00:00.746) 0:10:52.946 ************ 2025-05-04 00:56:39.274550 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274555 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274562 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274567 | orchestrator | 2025-05-04 00:56:39.274572 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-04 00:56:39.274577 | orchestrator | Sunday 04 May 2025 00:54:21 +0000 (0:00:00.356) 0:10:53.302 ************ 2025-05-04 00:56:39.274582 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274587 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274592 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274597 | orchestrator | 2025-05-04 00:56:39.274602 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-04 00:56:39.274606 | orchestrator | Sunday 04 May 2025 00:54:21 +0000 (0:00:00.366) 0:10:53.669 ************ 2025-05-04 00:56:39.274611 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274616 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274621 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274626 | orchestrator | 2025-05-04 00:56:39.274631 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-05-04 00:56:39.274636 | orchestrator | Sunday 04 May 2025 00:54:21 +0000 (0:00:00.338) 0:10:54.008 ************ 2025-05-04 00:56:39.274641 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274645 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274650 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274655 | orchestrator | 2025-05-04 00:56:39.274660 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-04 00:56:39.274666 | orchestrator | Sunday 04 May 2025 00:54:22 +0000 (0:00:00.731) 0:10:54.739 ************ 2025-05-04 00:56:39.274671 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274676 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274680 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274685 | orchestrator | 2025-05-04 00:56:39.274690 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-04 00:56:39.274700 | orchestrator | Sunday 04 May 2025 00:54:22 +0000 (0:00:00.380) 0:10:55.120 ************ 2025-05-04 00:56:39.274706 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274710 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274715 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274720 | orchestrator | 2025-05-04 00:56:39.274725 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-04 00:56:39.274730 | orchestrator | Sunday 04 May 2025 00:54:23 +0000 (0:00:00.344) 0:10:55.464 ************ 2025-05-04 00:56:39.274735 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274740 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274745 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274750 | orchestrator | 
2025-05-04 00:56:39.274755 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-04 00:56:39.274760 | orchestrator | Sunday 04 May 2025 00:54:23 +0000 (0:00:00.362) 0:10:55.827 ************ 2025-05-04 00:56:39.274779 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274787 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274795 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274803 | orchestrator | 2025-05-04 00:56:39.274810 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-04 00:56:39.274818 | orchestrator | Sunday 04 May 2025 00:54:24 +0000 (0:00:00.740) 0:10:56.568 ************ 2025-05-04 00:56:39.274826 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274834 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274843 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274852 | orchestrator | 2025-05-04 00:56:39.274858 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-04 00:56:39.274863 | orchestrator | Sunday 04 May 2025 00:54:24 +0000 (0:00:00.350) 0:10:56.918 ************ 2025-05-04 00:56:39.274868 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.274873 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.274878 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274883 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.274888 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.274892 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274897 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.274913 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.274918 | orchestrator | skipping: [testbed-node-5] 2025-05-04 
00:56:39.274923 | orchestrator | 2025-05-04 00:56:39.274928 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-04 00:56:39.274933 | orchestrator | Sunday 04 May 2025 00:54:25 +0000 (0:00:00.403) 0:10:57.322 ************ 2025-05-04 00:56:39.274938 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-04 00:56:39.274943 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-04 00:56:39.274948 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.274953 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-04 00:56:39.274958 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-04 00:56:39.274962 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.274967 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-04 00:56:39.274972 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-04 00:56:39.274977 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.274985 | orchestrator | 2025-05-04 00:56:39.274990 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-04 00:56:39.274995 | orchestrator | Sunday 04 May 2025 00:54:25 +0000 (0:00:00.378) 0:10:57.701 ************ 2025-05-04 00:56:39.275000 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275005 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275019 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275024 | orchestrator | 2025-05-04 00:56:39.275029 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-04 00:56:39.275034 | orchestrator | Sunday 04 May 2025 00:54:26 +0000 (0:00:00.678) 0:10:58.380 ************ 2025-05-04 00:56:39.275039 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275047 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 00:56:39.275052 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275057 | orchestrator | 2025-05-04 00:56:39.275062 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:56:39.275067 | orchestrator | Sunday 04 May 2025 00:54:26 +0000 (0:00:00.361) 0:10:58.741 ************ 2025-05-04 00:56:39.275072 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275077 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275082 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275087 | orchestrator | 2025-05-04 00:56:39.275092 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:56:39.275097 | orchestrator | Sunday 04 May 2025 00:54:26 +0000 (0:00:00.341) 0:10:59.083 ************ 2025-05-04 00:56:39.275102 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275106 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275111 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275116 | orchestrator | 2025-05-04 00:56:39.275121 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:56:39.275129 | orchestrator | Sunday 04 May 2025 00:54:27 +0000 (0:00:00.352) 0:10:59.435 ************ 2025-05-04 00:56:39.275134 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275139 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275143 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275148 | orchestrator | 2025-05-04 00:56:39.275153 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:56:39.275158 | orchestrator | Sunday 04 May 2025 00:54:27 +0000 (0:00:00.688) 0:11:00.123 ************ 2025-05-04 00:56:39.275163 | orchestrator | skipping: [testbed-node-3] 
2025-05-04 00:56:39.275168 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275173 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275177 | orchestrator | 2025-05-04 00:56:39.275182 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-04 00:56:39.275187 | orchestrator | Sunday 04 May 2025 00:54:28 +0000 (0:00:00.359) 0:11:00.482 ************ 2025-05-04 00:56:39.275192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.275197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.275202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.275207 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275212 | orchestrator | 2025-05-04 00:56:39.275217 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:56:39.275222 | orchestrator | Sunday 04 May 2025 00:54:28 +0000 (0:00:00.453) 0:11:00.936 ************ 2025-05-04 00:56:39.275227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.275232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.275237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.275242 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275247 | orchestrator | 2025-05-04 00:56:39.275252 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:56:39.275257 | orchestrator | Sunday 04 May 2025 00:54:29 +0000 (0:00:00.463) 0:11:01.399 ************ 2025-05-04 00:56:39.275262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.275266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.275271 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-04 00:56:39.275279 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275284 | orchestrator | 2025-05-04 00:56:39.275289 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.275294 | orchestrator | Sunday 04 May 2025 00:54:29 +0000 (0:00:00.422) 0:11:01.822 ************ 2025-05-04 00:56:39.275299 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275304 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275309 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275314 | orchestrator | 2025-05-04 00:56:39.275319 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:56:39.275323 | orchestrator | Sunday 04 May 2025 00:54:30 +0000 (0:00:00.761) 0:11:02.583 ************ 2025-05-04 00:56:39.275328 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:56:39.275333 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275338 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:56:39.275343 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275348 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:56:39.275353 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275358 | orchestrator | 2025-05-04 00:56:39.275363 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-04 00:56:39.275367 | orchestrator | Sunday 04 May 2025 00:54:30 +0000 (0:00:00.501) 0:11:03.084 ************ 2025-05-04 00:56:39.275372 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275377 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275382 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275387 | orchestrator | 2025-05-04 00:56:39.275392 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-05-04 00:56:39.275397 | orchestrator | Sunday 04 May 2025 00:54:31 +0000 (0:00:00.399) 0:11:03.484 ************ 2025-05-04 00:56:39.275401 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275406 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275411 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275416 | orchestrator | 2025-05-04 00:56:39.275421 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:56:39.275426 | orchestrator | Sunday 04 May 2025 00:54:31 +0000 (0:00:00.384) 0:11:03.868 ************ 2025-05-04 00:56:39.275431 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:56:39.275436 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275441 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:56:39.275446 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275451 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:56:39.275455 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275460 | orchestrator | 2025-05-04 00:56:39.275467 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:56:39.275472 | orchestrator | Sunday 04 May 2025 00:54:33 +0000 (0:00:01.337) 0:11:05.206 ************ 2025-05-04 00:56:39.275478 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.275483 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275488 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.275493 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275498 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.275503 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275508 | orchestrator | 2025-05-04 00:56:39.275513 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-04 00:56:39.275518 | orchestrator | Sunday 04 May 2025 00:54:33 +0000 (0:00:00.370) 0:11:05.577 ************ 2025-05-04 00:56:39.275523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.275531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.275535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.275540 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-04 00:56:39.275550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-04 00:56:39.275555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-04 00:56:39.275560 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-04 00:56:39.275570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-04 00:56:39.275574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-04 00:56:39.275579 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275584 | orchestrator | 2025-05-04 00:56:39.275589 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-04 00:56:39.275594 | orchestrator | Sunday 04 May 2025 00:54:34 +0000 (0:00:00.683) 0:11:06.260 ************ 2025-05-04 00:56:39.275599 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275604 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275609 | orchestrator | skipping: 
[testbed-node-5] 2025-05-04 00:56:39.275614 | orchestrator | 2025-05-04 00:56:39.275619 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-04 00:56:39.275623 | orchestrator | Sunday 04 May 2025 00:54:35 +0000 (0:00:00.952) 0:11:07.212 ************ 2025-05-04 00:56:39.275628 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.275633 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275638 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-04 00:56:39.275643 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275648 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-04 00:56:39.275653 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275658 | orchestrator | 2025-05-04 00:56:39.275663 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-04 00:56:39.275667 | orchestrator | Sunday 04 May 2025 00:54:35 +0000 (0:00:00.594) 0:11:07.807 ************ 2025-05-04 00:56:39.275672 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275677 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275682 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275687 | orchestrator | 2025-05-04 00:56:39.275692 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-04 00:56:39.275697 | orchestrator | Sunday 04 May 2025 00:54:36 +0000 (0:00:00.986) 0:11:08.794 ************ 2025-05-04 00:56:39.275702 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275707 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275712 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275717 | orchestrator | 2025-05-04 00:56:39.275722 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-04 00:56:39.275729 | orchestrator | Sunday 04 
May 2025 00:54:37 +0000 (0:00:00.586) 0:11:09.380 ************ 2025-05-04 00:56:39.275734 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.275739 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.275744 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-04 00:56:39.275749 | orchestrator | 2025-05-04 00:56:39.275754 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-04 00:56:39.275759 | orchestrator | Sunday 04 May 2025 00:54:37 +0000 (0:00:00.768) 0:11:10.149 ************ 2025-05-04 00:56:39.275776 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.275781 | orchestrator | 2025-05-04 00:56:39.275786 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-04 00:56:39.275791 | orchestrator | Sunday 04 May 2025 00:54:39 +0000 (0:00:01.814) 0:11:11.963 ************ 2025-05-04 00:56:39.275808 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-04 00:56:39.275815 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.275820 | orchestrator | 2025-05-04 00:56:39.275825 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-04 00:56:39.275830 | orchestrator | Sunday 04 May 2025 00:54:40 +0000 (0:00:00.387) 0:11:12.351 ************ 2025-05-04 00:56:39.275838 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:56:39.275844 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:56:39.275849 | orchestrator | 2025-05-04 00:56:39.275854 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-04 00:56:39.275859 | orchestrator | Sunday 04 May 2025 00:54:46 +0000 (0:00:06.262) 0:11:18.613 ************ 2025-05-04 00:56:39.275864 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 00:56:39.275869 | orchestrator | 2025-05-04 00:56:39.275874 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-04 00:56:39.275879 | orchestrator | Sunday 04 May 2025 00:54:49 +0000 (0:00:02.774) 0:11:21.387 ************ 2025-05-04 00:56:39.275884 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.275889 | orchestrator | 2025-05-04 00:56:39.275894 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-04 00:56:39.275899 | orchestrator | Sunday 04 May 2025 00:54:49 +0000 (0:00:00.714) 0:11:22.102 ************ 2025-05-04 00:56:39.275904 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-04 00:56:39.275909 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-04 00:56:39.275914 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-04 00:56:39.275919 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-04 00:56:39.275924 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-04 00:56:39.275928 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-04 00:56:39.275933 | orchestrator | 2025-05-04 00:56:39.275938 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-04 00:56:39.275943 | orchestrator | Sunday 04 May 2025 00:54:51 +0000 (0:00:01.167) 0:11:23.270 ************ 2025-05-04 00:56:39.275948 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:56:39.275953 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.275958 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-04 00:56:39.275963 | orchestrator | 2025-05-04 00:56:39.275968 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-04 00:56:39.275973 | orchestrator | Sunday 04 May 2025 00:54:52 +0000 (0:00:01.756) 0:11:25.026 ************ 2025-05-04 00:56:39.275978 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-04 00:56:39.275983 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.275988 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-04 00:56:39.275993 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-04 00:56:39.275998 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276006 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276011 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-04 00:56:39.276016 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-04 00:56:39.276021 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276026 | orchestrator | 2025-05-04 00:56:39.276031 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-04 00:56:39.276036 | orchestrator | Sunday 04 May 2025 00:54:54 +0000 (0:00:01.305) 0:11:26.332 ************ 2025-05-04 00:56:39.276040 | orchestrator | skipping: 
[testbed-node-3] 2025-05-04 00:56:39.276045 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276050 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276055 | orchestrator | 2025-05-04 00:56:39.276060 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-04 00:56:39.276065 | orchestrator | Sunday 04 May 2025 00:54:54 +0000 (0:00:00.364) 0:11:26.696 ************ 2025-05-04 00:56:39.276070 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.276075 | orchestrator | 2025-05-04 00:56:39.276080 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-04 00:56:39.276085 | orchestrator | Sunday 04 May 2025 00:54:55 +0000 (0:00:00.610) 0:11:27.307 ************ 2025-05-04 00:56:39.276090 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.276095 | orchestrator | 2025-05-04 00:56:39.276100 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-04 00:56:39.276105 | orchestrator | Sunday 04 May 2025 00:54:56 +0000 (0:00:00.984) 0:11:28.291 ************ 2025-05-04 00:56:39.276110 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276115 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276120 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276125 | orchestrator | 2025-05-04 00:56:39.276130 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-04 00:56:39.276135 | orchestrator | Sunday 04 May 2025 00:54:57 +0000 (0:00:01.251) 0:11:29.543 ************ 2025-05-04 00:56:39.276140 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276145 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276149 | orchestrator | 
changed: [testbed-node-5] 2025-05-04 00:56:39.276154 | orchestrator | 2025-05-04 00:56:39.276162 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-04 00:56:39.276167 | orchestrator | Sunday 04 May 2025 00:54:58 +0000 (0:00:01.181) 0:11:30.724 ************ 2025-05-04 00:56:39.276174 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276179 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276184 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276189 | orchestrator | 2025-05-04 00:56:39.276194 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-04 00:56:39.276199 | orchestrator | Sunday 04 May 2025 00:55:00 +0000 (0:00:01.875) 0:11:32.599 ************ 2025-05-04 00:56:39.276204 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276212 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276219 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276226 | orchestrator | 2025-05-04 00:56:39.276234 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-04 00:56:39.276241 | orchestrator | Sunday 04 May 2025 00:55:02 +0000 (0:00:01.846) 0:11:34.445 ************ 2025-05-04 00:56:39.276249 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-04 00:56:39.276257 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-04 00:56:39.276264 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
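The "wait for mds socket to exist" task above retries until the MDS admin socket appears (each node fails once, then succeeds). A minimal sketch of that retry loop, with a hypothetical socket path and injectable `exists`/`sleep` hooks so it can be exercised without a live Ceph node:

```python
import os
import time

def wait_for_socket(path, retries=5, delay=1.0,
                    exists=os.path.exists, sleep=time.sleep):
    """Poll for a unix admin socket, like the 'wait for mds socket to
    exist' task (retries/delay on a stat of /var/run/ceph/*.asok)."""
    for attempt in range(retries):
        if exists(path):
            return True
        # Mirrors the "FAILED - RETRYING: ... (N retries left)" log lines.
        print(f"retrying: {path} ({retries - attempt - 1} retries left)")
        sleep(delay)
    return exists(path)

# Simulate the behaviour seen in the log: one failed check, then the
# socket exists on the second poll.
calls = {"n": 0}
def fake_exists(_path):
    calls["n"] += 1
    return calls["n"] >= 2

ok = wait_for_socket("/var/run/ceph/ceph-mds.testbed-node-3.asok",
                     exists=fake_exists, sleep=lambda _d: None)
```

The path and helper names here are illustrative; ceph-ansible implements this with Ansible's `wait_for`/retry mechanics rather than hand-rolled Python.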
2025-05-04 00:56:39.276269 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276274 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276279 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276287 | orchestrator | 2025-05-04 00:56:39.276292 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.276297 | orchestrator | Sunday 04 May 2025 00:55:19 +0000 (0:00:16.912) 0:11:51.357 ************ 2025-05-04 00:56:39.276302 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276307 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276311 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276316 | orchestrator | 2025-05-04 00:56:39.276321 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-04 00:56:39.276326 | orchestrator | Sunday 04 May 2025 00:55:19 +0000 (0:00:00.670) 0:11:52.028 ************ 2025-05-04 00:56:39.276331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.276336 | orchestrator | 2025-05-04 00:56:39.276341 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-04 00:56:39.276346 | orchestrator | Sunday 04 May 2025 00:55:20 +0000 (0:00:00.671) 0:11:52.699 ************ 2025-05-04 00:56:39.276351 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276356 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276361 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276365 | orchestrator | 2025-05-04 00:56:39.276370 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-04 00:56:39.276375 | orchestrator | Sunday 04 May 2025 00:55:20 +0000 (0:00:00.259) 0:11:52.959 ************ 2025-05-04 00:56:39.276380 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276385 | 
orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276390 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276395 | orchestrator | 2025-05-04 00:56:39.276400 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-04 00:56:39.276405 | orchestrator | Sunday 04 May 2025 00:55:21 +0000 (0:00:01.165) 0:11:54.124 ************ 2025-05-04 00:56:39.276410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.276415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.276420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.276424 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276429 | orchestrator | 2025-05-04 00:56:39.276434 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-04 00:56:39.276439 | orchestrator | Sunday 04 May 2025 00:55:22 +0000 (0:00:00.955) 0:11:55.080 ************ 2025-05-04 00:56:39.276444 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276449 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276454 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276459 | orchestrator | 2025-05-04 00:56:39.276464 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.276469 | orchestrator | Sunday 04 May 2025 00:55:23 +0000 (0:00:00.297) 0:11:55.377 ************ 2025-05-04 00:56:39.276474 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.276479 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.276484 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.276488 | orchestrator | 2025-05-04 00:56:39.276493 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-04 00:56:39.276498 | orchestrator | 2025-05-04 00:56:39.276503 | orchestrator | TASK 
[ceph-handler : include check_running_containers.yml] ********************* 2025-05-04 00:56:39.276508 | orchestrator | Sunday 04 May 2025 00:55:25 +0000 (0:00:02.155) 0:11:57.533 ************ 2025-05-04 00:56:39.276513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.276520 | orchestrator | 2025-05-04 00:56:39.276525 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-04 00:56:39.276530 | orchestrator | Sunday 04 May 2025 00:55:26 +0000 (0:00:00.797) 0:11:58.330 ************ 2025-05-04 00:56:39.276535 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276543 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276548 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276553 | orchestrator | 2025-05-04 00:56:39.276558 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-04 00:56:39.276563 | orchestrator | Sunday 04 May 2025 00:55:26 +0000 (0:00:00.336) 0:11:58.667 ************ 2025-05-04 00:56:39.276568 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276573 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276578 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276583 | orchestrator | 2025-05-04 00:56:39.276588 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-04 00:56:39.276593 | orchestrator | Sunday 04 May 2025 00:55:27 +0000 (0:00:00.729) 0:11:59.397 ************ 2025-05-04 00:56:39.276597 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276602 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276614 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276619 | orchestrator | 2025-05-04 00:56:39.276624 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
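The `check_running_containers.yml` tasks that follow probe each node for running mon/osd/mds/rgw containers. A rough sketch of that check under stated assumptions (the `container_running` helper and the sample `podman ps` name column are hypothetical, not ceph-ansible code):

```python
def container_running(ps_output, name_fragment):
    """Return True if any running container name contains name_fragment.
    Sketch of the 'check for a ... container' tasks, which inspect the
    container engine's process list on each node."""
    return any(name_fragment in line.split()[-1]
               for line in ps_output.strip().splitlines() if line.strip())

# Hypothetical container name listing for testbed-node-3, matching the
# ok/skipping pattern above (osd, mds, rgw, crash present; mon absent):
sample = """ceph-osd-1
ceph-mds-testbed-node-3
ceph-crash-testbed-node-3
ceph-rgw-testbed-node-3-rgw0
"""
```

Which checks report `ok` versus `skipping` in the log reflects which daemon groups each node belongs to, not a probe failure.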
2025-05-04 00:56:39.276629 | orchestrator | Sunday 04 May 2025 00:55:28 +0000 (0:00:01.008) 0:12:00.405 ************ 2025-05-04 00:56:39.276634 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276639 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276644 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276649 | orchestrator | 2025-05-04 00:56:39.276654 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-04 00:56:39.276659 | orchestrator | Sunday 04 May 2025 00:55:28 +0000 (0:00:00.747) 0:12:01.153 ************ 2025-05-04 00:56:39.276664 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276669 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276674 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276679 | orchestrator | 2025-05-04 00:56:39.276684 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-04 00:56:39.276689 | orchestrator | Sunday 04 May 2025 00:55:29 +0000 (0:00:00.325) 0:12:01.478 ************ 2025-05-04 00:56:39.276694 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276699 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276704 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276709 | orchestrator | 2025-05-04 00:56:39.276714 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-04 00:56:39.276719 | orchestrator | Sunday 04 May 2025 00:55:29 +0000 (0:00:00.327) 0:12:01.805 ************ 2025-05-04 00:56:39.276723 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276728 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276733 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276738 | orchestrator | 2025-05-04 00:56:39.276743 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-04 
00:56:39.276748 | orchestrator | Sunday 04 May 2025 00:55:30 +0000 (0:00:00.643) 0:12:02.448 ************ 2025-05-04 00:56:39.276753 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276758 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276774 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276780 | orchestrator | 2025-05-04 00:56:39.276785 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-04 00:56:39.276790 | orchestrator | Sunday 04 May 2025 00:55:30 +0000 (0:00:00.349) 0:12:02.798 ************ 2025-05-04 00:56:39.276795 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276799 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276804 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276809 | orchestrator | 2025-05-04 00:56:39.276814 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-04 00:56:39.276819 | orchestrator | Sunday 04 May 2025 00:55:30 +0000 (0:00:00.332) 0:12:03.131 ************ 2025-05-04 00:56:39.276824 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276829 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276837 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276842 | orchestrator | 2025-05-04 00:56:39.276847 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-04 00:56:39.276852 | orchestrator | Sunday 04 May 2025 00:55:31 +0000 (0:00:00.337) 0:12:03.469 ************ 2025-05-04 00:56:39.276857 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276862 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276867 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276872 | orchestrator | 2025-05-04 00:56:39.276877 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-04 00:56:39.276882 | 
orchestrator | Sunday 04 May 2025 00:55:32 +0000 (0:00:01.098) 0:12:04.568 ************ 2025-05-04 00:56:39.276887 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276892 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276896 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276901 | orchestrator | 2025-05-04 00:56:39.276906 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-04 00:56:39.276911 | orchestrator | Sunday 04 May 2025 00:55:32 +0000 (0:00:00.335) 0:12:04.904 ************ 2025-05-04 00:56:39.276916 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.276921 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.276926 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.276931 | orchestrator | 2025-05-04 00:56:39.276936 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-04 00:56:39.276941 | orchestrator | Sunday 04 May 2025 00:55:33 +0000 (0:00:00.344) 0:12:05.248 ************ 2025-05-04 00:56:39.276946 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276951 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276956 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276961 | orchestrator | 2025-05-04 00:56:39.276965 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-04 00:56:39.276970 | orchestrator | Sunday 04 May 2025 00:55:33 +0000 (0:00:00.347) 0:12:05.596 ************ 2025-05-04 00:56:39.276975 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.276980 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.276985 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.276990 | orchestrator | 2025-05-04 00:56:39.276995 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-04 00:56:39.277000 | orchestrator | Sunday 04 May 2025 
00:55:34 +0000 (0:00:00.665) 0:12:06.261 ************ 2025-05-04 00:56:39.277005 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.277010 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.277014 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.277019 | orchestrator | 2025-05-04 00:56:39.277024 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-04 00:56:39.277029 | orchestrator | Sunday 04 May 2025 00:55:34 +0000 (0:00:00.361) 0:12:06.622 ************ 2025-05-04 00:56:39.277034 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277039 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277044 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277049 | orchestrator | 2025-05-04 00:56:39.277054 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-04 00:56:39.277059 | orchestrator | Sunday 04 May 2025 00:55:34 +0000 (0:00:00.346) 0:12:06.968 ************ 2025-05-04 00:56:39.277063 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277068 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277073 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277078 | orchestrator | 2025-05-04 00:56:39.277085 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-04 00:56:39.277091 | orchestrator | Sunday 04 May 2025 00:55:35 +0000 (0:00:00.367) 0:12:07.336 ************ 2025-05-04 00:56:39.277095 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277101 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277105 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277110 | orchestrator | 2025-05-04 00:56:39.277120 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-04 00:56:39.277126 | orchestrator | Sunday 04 May 2025 00:55:35 +0000 
(0:00:00.680) 0:12:08.017 ************ 2025-05-04 00:56:39.277131 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.277136 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.277143 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.277148 | orchestrator | 2025-05-04 00:56:39.277153 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-04 00:56:39.277160 | orchestrator | Sunday 04 May 2025 00:55:36 +0000 (0:00:00.394) 0:12:08.411 ************ 2025-05-04 00:56:39.277165 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277170 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277175 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277180 | orchestrator | 2025-05-04 00:56:39.277185 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-04 00:56:39.277190 | orchestrator | Sunday 04 May 2025 00:55:36 +0000 (0:00:00.346) 0:12:08.758 ************ 2025-05-04 00:56:39.277195 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277200 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277205 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277210 | orchestrator | 2025-05-04 00:56:39.277215 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-04 00:56:39.277220 | orchestrator | Sunday 04 May 2025 00:55:36 +0000 (0:00:00.354) 0:12:09.112 ************ 2025-05-04 00:56:39.277225 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277230 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277235 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277240 | orchestrator | 2025-05-04 00:56:39.277244 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-04 00:56:39.277249 | orchestrator | Sunday 04 May 2025 00:55:37 +0000 (0:00:00.632) 
0:12:09.745 ************ 2025-05-04 00:56:39.277254 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277259 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277264 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277269 | orchestrator | 2025-05-04 00:56:39.277274 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-04 00:56:39.277279 | orchestrator | Sunday 04 May 2025 00:55:37 +0000 (0:00:00.350) 0:12:10.095 ************ 2025-05-04 00:56:39.277284 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277289 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277294 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277299 | orchestrator | 2025-05-04 00:56:39.277304 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-04 00:56:39.277309 | orchestrator | Sunday 04 May 2025 00:55:38 +0000 (0:00:00.375) 0:12:10.471 ************ 2025-05-04 00:56:39.277314 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277319 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277324 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277329 | orchestrator | 2025-05-04 00:56:39.277334 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-04 00:56:39.277339 | orchestrator | Sunday 04 May 2025 00:55:38 +0000 (0:00:00.336) 0:12:10.808 ************ 2025-05-04 00:56:39.277343 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277348 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277353 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277358 | orchestrator | 2025-05-04 00:56:39.277363 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-04 00:56:39.277368 | orchestrator | Sunday 04 May 2025 00:55:39 
+0000 (0:00:00.632) 0:12:11.441 ************ 2025-05-04 00:56:39.277373 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277378 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277383 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277388 | orchestrator | 2025-05-04 00:56:39.277393 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-04 00:56:39.277401 | orchestrator | Sunday 04 May 2025 00:55:39 +0000 (0:00:00.332) 0:12:11.773 ************ 2025-05-04 00:56:39.277406 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277411 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277416 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277421 | orchestrator | 2025-05-04 00:56:39.277426 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-04 00:56:39.277431 | orchestrator | Sunday 04 May 2025 00:55:39 +0000 (0:00:00.328) 0:12:12.102 ************ 2025-05-04 00:56:39.277436 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277441 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277446 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277451 | orchestrator | 2025-05-04 00:56:39.277456 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-04 00:56:39.277461 | orchestrator | Sunday 04 May 2025 00:55:40 +0000 (0:00:00.348) 0:12:12.451 ************ 2025-05-04 00:56:39.277466 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277471 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277476 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277481 | orchestrator | 2025-05-04 00:56:39.277486 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 
2025-05-04 00:56:39.277491 | orchestrator | Sunday 04 May 2025 00:55:40 +0000 (0:00:00.614) 0:12:13.065 ************ 2025-05-04 00:56:39.277496 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277501 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277506 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277511 | orchestrator | 2025-05-04 00:56:39.277516 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-04 00:56:39.277523 | orchestrator | Sunday 04 May 2025 00:55:41 +0000 (0:00:00.348) 0:12:13.414 ************ 2025-05-04 00:56:39.277528 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.277533 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-04 00:56:39.277538 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277543 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.277548 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-04 00:56:39.277553 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277558 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.277563 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-04 00:56:39.277568 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277573 | orchestrator | 2025-05-04 00:56:39.277578 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-04 00:56:39.277583 | orchestrator | Sunday 04 May 2025 00:55:41 +0000 (0:00:00.401) 0:12:13.815 ************ 2025-05-04 00:56:39.277588 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-04 00:56:39.277595 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-04 00:56:39.277600 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277605 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  
2025-05-04 00:56:39.277610 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-04 00:56:39.277615 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277620 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-04 00:56:39.277625 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-04 00:56:39.277630 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277635 | orchestrator | 2025-05-04 00:56:39.277639 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-04 00:56:39.277644 | orchestrator | Sunday 04 May 2025 00:55:41 +0000 (0:00:00.373) 0:12:14.188 ************ 2025-05-04 00:56:39.277649 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277654 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277663 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277668 | orchestrator | 2025-05-04 00:56:39.277673 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-04 00:56:39.277678 | orchestrator | Sunday 04 May 2025 00:55:42 +0000 (0:00:00.648) 0:12:14.837 ************ 2025-05-04 00:56:39.277683 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277688 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277693 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277698 | orchestrator | 2025-05-04 00:56:39.277703 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:56:39.277708 | orchestrator | Sunday 04 May 2025 00:55:42 +0000 (0:00:00.350) 0:12:15.187 ************ 2025-05-04 00:56:39.277713 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277718 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277725 | orchestrator | skipping: [testbed-node-5] 
2025-05-04 00:56:39.277730 | orchestrator | 2025-05-04 00:56:39.277735 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:56:39.277740 | orchestrator | Sunday 04 May 2025 00:55:43 +0000 (0:00:00.317) 0:12:15.505 ************ 2025-05-04 00:56:39.277745 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277750 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277754 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277759 | orchestrator | 2025-05-04 00:56:39.277791 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:56:39.277797 | orchestrator | Sunday 04 May 2025 00:55:43 +0000 (0:00:00.340) 0:12:15.846 ************ 2025-05-04 00:56:39.277802 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277807 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277812 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277817 | orchestrator | 2025-05-04 00:56:39.277822 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:56:39.277827 | orchestrator | Sunday 04 May 2025 00:55:44 +0000 (0:00:00.634) 0:12:16.480 ************ 2025-05-04 00:56:39.277832 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277837 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277842 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277846 | orchestrator | 2025-05-04 00:56:39.277851 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-04 00:56:39.277856 | orchestrator | Sunday 04 May 2025 00:55:44 +0000 (0:00:00.370) 0:12:16.850 ************ 2025-05-04 00:56:39.277861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.277866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 
00:56:39.277871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.277876 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277881 | orchestrator | 2025-05-04 00:56:39.277886 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:56:39.277891 | orchestrator | Sunday 04 May 2025 00:55:45 +0000 (0:00:00.436) 0:12:17.287 ************ 2025-05-04 00:56:39.277896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.277901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.277906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.277911 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277916 | orchestrator | 2025-05-04 00:56:39.277920 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:56:39.277925 | orchestrator | Sunday 04 May 2025 00:55:45 +0000 (0:00:00.455) 0:12:17.742 ************ 2025-05-04 00:56:39.277930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.277935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.277940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.277949 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277954 | orchestrator | 2025-05-04 00:56:39.277959 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.277966 | orchestrator | Sunday 04 May 2025 00:55:45 +0000 (0:00:00.452) 0:12:18.195 ************ 2025-05-04 00:56:39.277971 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.277976 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.277981 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.277985 | orchestrator | 
2025-05-04 00:56:39.277990 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:56:39.277995 | orchestrator | Sunday 04 May 2025 00:55:46 +0000 (0:00:00.360) 0:12:18.555 ************ 2025-05-04 00:56:39.278000 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:56:39.278005 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278010 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:56:39.278035 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278040 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:56:39.278045 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278050 | orchestrator | 2025-05-04 00:56:39.278055 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-04 00:56:39.278060 | orchestrator | Sunday 04 May 2025 00:55:47 +0000 (0:00:00.882) 0:12:19.437 ************ 2025-05-04 00:56:39.278065 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278070 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278075 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278080 | orchestrator | 2025-05-04 00:56:39.278085 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:56:39.278090 | orchestrator | Sunday 04 May 2025 00:55:47 +0000 (0:00:00.412) 0:12:19.850 ************ 2025-05-04 00:56:39.278095 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278100 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278105 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278109 | orchestrator | 2025-05-04 00:56:39.278114 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:56:39.278119 | orchestrator | Sunday 04 May 2025 00:55:48 +0000 (0:00:00.356) 0:12:20.206 ************ 
2025-05-04 00:56:39.278124 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:56:39.278129 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278134 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:56:39.278139 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278144 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:56:39.278149 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278154 | orchestrator | 2025-05-04 00:56:39.278159 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:56:39.278164 | orchestrator | Sunday 04 May 2025 00:55:48 +0000 (0:00:00.478) 0:12:20.685 ************ 2025-05-04 00:56:39.278169 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.278177 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278182 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.278187 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278192 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-04 00:56:39.278197 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278202 | orchestrator | 2025-05-04 00:56:39.278207 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-04 00:56:39.278212 | orchestrator | Sunday 04 May 2025 00:55:49 +0000 (0:00:00.710) 0:12:21.396 ************ 2025-05-04 00:56:39.278217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.278225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.278230 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.278235 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-04 00:56:39.278240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-04 00:56:39.278244 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-04 00:56:39.278249 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278254 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-04 00:56:39.278264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-04 00:56:39.278269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-04 00:56:39.278274 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278279 | orchestrator | 2025-05-04 00:56:39.278284 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-04 00:56:39.278289 | orchestrator | Sunday 04 May 2025 00:55:49 +0000 (0:00:00.637) 0:12:22.033 ************ 2025-05-04 00:56:39.278294 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278298 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278303 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278308 | orchestrator | 2025-05-04 00:56:39.278313 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-04 00:56:39.278318 | orchestrator | Sunday 04 May 2025 00:55:50 +0000 (0:00:00.869) 0:12:22.902 ************ 2025-05-04 00:56:39.278323 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.278328 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278333 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-04 00:56:39.278338 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278343 | 
orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-04 00:56:39.278348 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278353 | orchestrator | 2025-05-04 00:56:39.278358 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-04 00:56:39.278363 | orchestrator | Sunday 04 May 2025 00:55:51 +0000 (0:00:00.609) 0:12:23.512 ************ 2025-05-04 00:56:39.278367 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278377 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278382 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278387 | orchestrator | 2025-05-04 00:56:39.278392 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-04 00:56:39.278397 | orchestrator | Sunday 04 May 2025 00:55:52 +0000 (0:00:00.845) 0:12:24.357 ************ 2025-05-04 00:56:39.278402 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278407 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278412 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278417 | orchestrator | 2025-05-04 00:56:39.278422 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-04 00:56:39.278427 | orchestrator | Sunday 04 May 2025 00:55:52 +0000 (0:00:00.553) 0:12:24.911 ************ 2025-05-04 00:56:39.278432 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.278437 | orchestrator | 2025-05-04 00:56:39.278442 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-04 00:56:39.278446 | orchestrator | Sunday 04 May 2025 00:55:53 +0000 (0:00:00.904) 0:12:25.816 ************ 2025-05-04 00:56:39.278451 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-04 00:56:39.278456 | orchestrator | ok: [testbed-node-4] 
=> (item=/var/run/ceph) 2025-05-04 00:56:39.278461 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-04 00:56:39.278466 | orchestrator | 2025-05-04 00:56:39.278471 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-04 00:56:39.278479 | orchestrator | Sunday 04 May 2025 00:55:54 +0000 (0:00:00.756) 0:12:26.573 ************ 2025-05-04 00:56:39.278484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:56:39.278489 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.278494 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-04 00:56:39.278499 | orchestrator | 2025-05-04 00:56:39.278504 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-04 00:56:39.278509 | orchestrator | Sunday 04 May 2025 00:55:56 +0000 (0:00:01.823) 0:12:28.397 ************ 2025-05-04 00:56:39.278514 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-04 00:56:39.278519 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-04 00:56:39.278524 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.278529 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-04 00:56:39.278534 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-04 00:56:39.278539 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.278543 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-04 00:56:39.278548 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-04 00:56:39.278553 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.278558 | orchestrator | 2025-05-04 00:56:39.278563 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-04 00:56:39.278568 | orchestrator | Sunday 04 May 2025 00:55:57 +0000 (0:00:01.510) 0:12:29.907 
************ 2025-05-04 00:56:39.278573 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278578 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278583 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278588 | orchestrator | 2025-05-04 00:56:39.278593 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-04 00:56:39.278598 | orchestrator | Sunday 04 May 2025 00:55:58 +0000 (0:00:00.372) 0:12:30.279 ************ 2025-05-04 00:56:39.278603 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278608 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278613 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278618 | orchestrator | 2025-05-04 00:56:39.278623 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-04 00:56:39.278628 | orchestrator | Sunday 04 May 2025 00:55:58 +0000 (0:00:00.351) 0:12:30.630 ************ 2025-05-04 00:56:39.278633 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-04 00:56:39.278638 | orchestrator | 2025-05-04 00:56:39.278643 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-04 00:56:39.278648 | orchestrator | Sunday 04 May 2025 00:55:58 +0000 (0:00:00.249) 0:12:30.880 ************ 2025-05-04 00:56:39.278653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278670 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278680 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278685 | orchestrator | 2025-05-04 00:56:39.278690 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-04 00:56:39.278695 | orchestrator | Sunday 04 May 2025 00:55:59 +0000 (0:00:00.786) 0:12:31.667 ************ 2025-05-04 00:56:39.278700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278733 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278738 | orchestrator | 2025-05-04 00:56:39.278743 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-04 00:56:39.278748 | orchestrator | Sunday 04 May 2025 00:56:00 +0000 (0:00:00.812) 0:12:32.479 ************ 2025-05-04 00:56:39.278753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2025-05-04 00:56:39.278758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-04 00:56:39.278792 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278797 | orchestrator | 2025-05-04 00:56:39.278802 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-04 00:56:39.278807 | orchestrator | Sunday 04 May 2025 00:56:00 +0000 (0:00:00.580) 0:12:33.060 ************ 2025-05-04 00:56:39.278812 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-04 00:56:39.278818 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-04 00:56:39.278823 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-04 00:56:39.278828 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-04 00:56:39.278833 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-04 00:56:39.278838 | orchestrator | 2025-05-04 00:56:39.278843 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-04 00:56:39.278848 | orchestrator | Sunday 04 May 2025 00:56:23 +0000 (0:00:22.754) 0:12:55.815 ************ 2025-05-04 00:56:39.278853 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278857 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278862 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278867 | orchestrator | 2025-05-04 00:56:39.278872 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-04 00:56:39.278877 | orchestrator | Sunday 04 May 2025 00:56:24 +0000 (0:00:00.521) 0:12:56.336 ************ 2025-05-04 00:56:39.278882 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.278887 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.278892 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.278899 | orchestrator | 2025-05-04 00:56:39.278904 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-04 00:56:39.278909 | orchestrator | Sunday 04 May 2025 00:56:24 +0000 (0:00:00.358) 0:12:56.695 ************ 2025-05-04 00:56:39.278917 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.278922 | orchestrator | 2025-05-04 00:56:39.278929 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-04 00:56:39.278934 | orchestrator | Sunday 04 May 2025 00:56:25 +0000 (0:00:00.598) 0:12:57.294 ************ 2025-05-04 00:56:39.278939 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.278944 | orchestrator | 
2025-05-04 00:56:39.278949 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-04 00:56:39.278954 | orchestrator | Sunday 04 May 2025 00:56:25 +0000 (0:00:00.883) 0:12:58.177 ************ 2025-05-04 00:56:39.278958 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.278963 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.278968 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.278973 | orchestrator | 2025-05-04 00:56:39.278978 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-04 00:56:39.278983 | orchestrator | Sunday 04 May 2025 00:56:27 +0000 (0:00:01.248) 0:12:59.426 ************ 2025-05-04 00:56:39.278988 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.278993 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.278998 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.279003 | orchestrator | 2025-05-04 00:56:39.279008 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-04 00:56:39.279014 | orchestrator | Sunday 04 May 2025 00:56:28 +0000 (0:00:01.179) 0:13:00.605 ************ 2025-05-04 00:56:39.279020 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.279024 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.279029 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.279034 | orchestrator | 2025-05-04 00:56:39.279039 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-04 00:56:39.279044 | orchestrator | Sunday 04 May 2025 00:56:30 +0000 (0:00:02.094) 0:13:02.699 ************ 2025-05-04 00:56:39.279049 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.279054 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.279059 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-04 00:56:39.279064 | orchestrator | 2025-05-04 00:56:39.279069 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-04 00:56:39.279074 | orchestrator | Sunday 04 May 2025 00:56:32 +0000 (0:00:02.038) 0:13:04.738 ************ 2025-05-04 00:56:39.279079 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.279084 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:56:39.279089 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:56:39.279094 | orchestrator | 2025-05-04 00:56:39.279099 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-04 00:56:39.279104 | orchestrator | Sunday 04 May 2025 00:56:33 +0000 (0:00:00.984) 0:13:05.723 ************ 2025-05-04 00:56:39.279109 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.279114 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.279119 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.279123 | orchestrator | 2025-05-04 00:56:39.279128 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-04 00:56:39.279133 | orchestrator | Sunday 04 May 2025 00:56:34 +0000 (0:00:00.699) 0:13:06.423 ************ 2025-05-04 00:56:39.279138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:56:39.279146 | orchestrator | 2025-05-04 00:56:39.279151 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-04 00:56:39.279156 | orchestrator | Sunday 04 May 2025 00:56:34 +0000 (0:00:00.641) 0:13:07.065 ************ 2025-05-04 00:56:39.279161 
| orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.279166 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.279171 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.279176 | orchestrator | 2025-05-04 00:56:39.279181 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-04 00:56:39.279186 | orchestrator | Sunday 04 May 2025 00:56:35 +0000 (0:00:00.278) 0:13:07.343 ************ 2025-05-04 00:56:39.279191 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.279196 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.279201 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.279206 | orchestrator | 2025-05-04 00:56:39.279211 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-04 00:56:39.279216 | orchestrator | Sunday 04 May 2025 00:56:36 +0000 (0:00:01.168) 0:13:08.512 ************ 2025-05-04 00:56:39.279221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:56:39.279225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:56:39.279230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:56:39.279235 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:56:39.279240 | orchestrator | 2025-05-04 00:56:39.279245 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-04 00:56:39.279250 | orchestrator | Sunday 04 May 2025 00:56:37 +0000 (0:00:00.784) 0:13:09.296 ************ 2025-05-04 00:56:39.279255 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:56:39.279260 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:56:39.279265 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:56:39.279270 | orchestrator | 2025-05-04 00:56:39.279275 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-04 00:56:39.279280 | 
orchestrator | Sunday 04 May 2025 00:56:37 +0000 (0:00:00.303) 0:13:09.599 ************ 2025-05-04 00:56:39.279285 | orchestrator | changed: [testbed-node-3] 2025-05-04 00:56:39.279289 | orchestrator | changed: [testbed-node-4] 2025-05-04 00:56:39.279294 | orchestrator | changed: [testbed-node-5] 2025-05-04 00:56:39.279299 | orchestrator | 2025-05-04 00:56:39.279304 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:56:39.279309 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-04 00:56:39.279314 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-04 00:56:39.279319 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-04 00:56:39.279324 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-04 00:56:39.279329 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-04 00:56:39.279334 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-04 00:56:39.279339 | orchestrator | 2025-05-04 00:56:39.279344 | orchestrator | 2025-05-04 00:56:39.279349 | orchestrator | 2025-05-04 00:56:39.279356 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:56:42.283316 | orchestrator | Sunday 04 May 2025 00:56:38 +0000 (0:00:01.143) 0:13:10.743 ************ 2025-05-04 00:56:42.283446 | orchestrator | =============================================================================== 2025-05-04 00:56:42.283547 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 46.14s 2025-05-04 00:56:42.283566 | orchestrator | ceph-osd : use ceph-volume to 
create bluestore osds -------------------- 41.25s 2025-05-04 00:56:42.283607 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 22.75s 2025-05-04 00:56:42.283623 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.50s 2025-05-04 00:56:42.283637 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 16.91s 2025-05-04 00:56:42.283651 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.76s 2025-05-04 00:56:42.283666 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.36s 2025-05-04 00:56:42.283680 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.84s 2025-05-04 00:56:42.283694 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.13s 2025-05-04 00:56:42.283709 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.61s 2025-05-04 00:56:42.283723 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.26s 2025-05-04 00:56:42.283738 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.08s 2025-05-04 00:56:42.283752 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.63s 2025-05-04 00:56:42.283803 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.56s 2025-05-04 00:56:42.283819 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.04s 2025-05-04 00:56:42.283835 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.99s 2025-05-04 00:56:42.283851 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.74s 2025-05-04 00:56:42.283868 | orchestrator | ceph-handler : remove tempdir for scripts 
------------------------------- 3.70s 2025-05-04 00:56:42.283884 | orchestrator | ceph-handler : set _crash_handler_called after restart ------------------ 3.48s 2025-05-04 00:56:42.283900 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.17s 2025-05-04 00:56:42.283917 | orchestrator | 2025-05-04 00:56:39 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:42.283939 | orchestrator | 2025-05-04 00:56:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:42.283956 | orchestrator | 2025-05-04 00:56:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:42.283990 | orchestrator | 2025-05-04 00:56:42 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:42.284384 | orchestrator | 2025-05-04 00:56:42 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:56:42.286131 | orchestrator | 2025-05-04 00:56:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:42.286244 | orchestrator | 2025-05-04 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:45.330234 | orchestrator | 2025-05-04 00:56:45 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:45.330572 | orchestrator | 2025-05-04 00:56:45 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:56:45.332978 | orchestrator | 2025-05-04 00:56:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:48.399602 | orchestrator | 2025-05-04 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:48.399753 | orchestrator | 2025-05-04 00:56:48 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:48.400540 | orchestrator | 2025-05-04 00:56:48 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 
00:56:48.403451 | orchestrator | 2025-05-04 00:56:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:48.403587 | orchestrator | 2025-05-04 00:56:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:51.459946 | orchestrator | 2025-05-04 00:56:51 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:51.463118 | orchestrator | 2025-05-04 00:56:51 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:56:51.464968 | orchestrator | 2025-05-04 00:56:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:51.465139 | orchestrator | 2025-05-04 00:56:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:54.517523 | orchestrator | 2025-05-04 00:56:54 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:54.518466 | orchestrator | 2025-05-04 00:56:54 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:56:54.519718 | orchestrator | 2025-05-04 00:56:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:56:54.520007 | orchestrator | 2025-05-04 00:56:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:56:57.560341 | orchestrator | 2025-05-04 00:56:57 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state STARTED 2025-05-04 00:56:57.560939 | orchestrator | 2025-05-04 00:56:57 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:56:57.560982 | orchestrator | 2025-05-04 00:56:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:00.611101 | orchestrator | 2025-05-04 00:56:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:00.611276 | orchestrator | 2025-05-04 00:57:00 | INFO  | Task 79d522b6-4e4d-4070-9087-d88928e6d743 is in state SUCCESS 2025-05-04 00:57:00.612646 | orchestrator | 2025-05-04 
00:57:00.612695 | orchestrator | 2025-05-04 00:57:00.612712 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-04 00:57:00.612727 | orchestrator | 2025-05-04 00:57:00.612742 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-04 00:57:00.612757 | orchestrator | Sunday 04 May 2025 00:53:30 +0000 (0:00:00.167) 0:00:00.167 ************ 2025-05-04 00:57:00.612810 | orchestrator | ok: [localhost] => { 2025-05-04 00:57:00.612828 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-04 00:57:00.612842 | orchestrator | } 2025-05-04 00:57:00.612857 | orchestrator | 2025-05-04 00:57:00.612871 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-04 00:57:00.612886 | orchestrator | Sunday 04 May 2025 00:53:30 +0000 (0:00:00.053) 0:00:00.221 ************ 2025-05-04 00:57:00.612900 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-04 00:57:00.612917 | orchestrator | ...ignoring 2025-05-04 00:57:00.612931 | orchestrator | 2025-05-04 00:57:00.612945 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-04 00:57:00.612959 | orchestrator | Sunday 04 May 2025 00:53:33 +0000 (0:00:02.557) 0:00:02.778 ************ 2025-05-04 00:57:00.612974 | orchestrator | skipping: [localhost] 2025-05-04 00:57:00.612988 | orchestrator | 2025-05-04 00:57:00.613002 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-04 00:57:00.613016 | orchestrator | Sunday 04 May 2025 00:53:33 +0000 (0:00:00.086) 0:00:02.864 ************ 2025-05-04 00:57:00.613030 | orchestrator | ok: [localhost] 2025-05-04 00:57:00.613044 | orchestrator | 2025-05-04 00:57:00.613059 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 00:57:00.613102 | orchestrator | 2025-05-04 00:57:00.613117 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 00:57:00.613131 | orchestrator | Sunday 04 May 2025 00:53:33 +0000 (0:00:00.146) 0:00:03.011 ************ 2025-05-04 00:57:00.613145 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.613160 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.613174 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.613188 | orchestrator | 2025-05-04 00:57:00.613203 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 00:57:00.613217 | orchestrator | Sunday 04 May 2025 00:53:34 +0000 (0:00:00.533) 0:00:03.545 ************ 2025-05-04 00:57:00.613232 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-04 00:57:00.613270 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-05-04 00:57:00.613285 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-04 00:57:00.613299 | orchestrator | 2025-05-04 00:57:00.613313 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-04 00:57:00.613328 | orchestrator | 2025-05-04 00:57:00.613342 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-04 00:57:00.613356 | orchestrator | Sunday 04 May 2025 00:53:34 +0000 (0:00:00.432) 0:00:03.977 ************ 2025-05-04 00:57:00.613370 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:57:00.613384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-04 00:57:00.613398 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-04 00:57:00.613411 | orchestrator | 2025-05-04 00:57:00.613425 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-04 00:57:00.613440 | orchestrator | Sunday 04 May 2025 00:53:35 +0000 (0:00:00.773) 0:00:04.750 ************ 2025-05-04 00:57:00.613454 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:57:00.613470 | orchestrator | 2025-05-04 00:57:00.613484 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-04 00:57:00.613498 | orchestrator | Sunday 04 May 2025 00:53:36 +0000 (0:00:00.788) 0:00:05.539 ************ 2025-05-04 00:57:00.613530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.613559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.613577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.613594 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.613618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.613642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.613657 | orchestrator | 2025-05-04 00:57:00.613672 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-04 00:57:00.613686 | orchestrator | Sunday 04 May 2025 00:53:40 +0000 (0:00:04.897) 0:00:10.437 ************ 2025-05-04 00:57:00.613701 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.613716 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.613730 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.613744 | orchestrator | 2025-05-04 00:57:00.613758 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-04 00:57:00.613820 | orchestrator | Sunday 04 May 2025 00:53:41 +0000 (0:00:00.770) 0:00:11.207 ************ 2025-05-04 00:57:00.613835 | orchestrator | 
skipping: [testbed-node-2] 2025-05-04 00:57:00.613849 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.613877 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.613901 | orchestrator | 2025-05-04 00:57:00.613926 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-04 00:57:00.613950 | orchestrator | Sunday 04 May 2025 00:53:43 +0000 (0:00:01.483) 0:00:12.691 ************ 2025-05-04 00:57:00.614118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614256 | orchestrator | 2025-05-04 00:57:00.614271 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-04 00:57:00.614285 | orchestrator | Sunday 04 May 2025 00:53:48 +0000 (0:00:04.825) 0:00:17.517 ************ 2025-05-04 00:57:00.614300 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.614315 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.614329 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.614343 | orchestrator | 2025-05-04 00:57:00.614357 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-04 00:57:00.614371 | orchestrator | Sunday 04 May 2025 00:53:49 +0000 (0:00:01.137) 0:00:18.655 ************ 2025-05-04 00:57:00.614386 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:57:00.614400 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.614414 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:57:00.614428 | orchestrator | 2025-05-04 00:57:00.614442 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-04 00:57:00.614456 | orchestrator | Sunday 04 May 2025 00:53:57 +0000 (0:00:08.374) 0:00:27.029 ************ 2025-05-04 00:57:00.614479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-04 00:57:00.614566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-04 00:57:00.614597 | orchestrator | 2025-05-04 00:57:00.614611 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-04 00:57:00.614626 | orchestrator | Sunday 04 May 2025 00:54:01 +0000 (0:00:03.988) 0:00:31.018 ************ 2025-05-04 00:57:00.614640 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.614655 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:57:00.614669 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:57:00.614683 | orchestrator | 2025-05-04 00:57:00.614697 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-04 00:57:00.614712 | orchestrator | Sunday 04 May 2025 00:54:02 +0000 (0:00:01.054) 0:00:32.072 ************ 2025-05-04 00:57:00.614726 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.614741 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.614755 | orchestrator | ok: [testbed-node-2] 2025-05-04 
00:57:00.614800 | orchestrator | 2025-05-04 00:57:00.614816 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-04 00:57:00.614830 | orchestrator | Sunday 04 May 2025 00:54:02 +0000 (0:00:00.369) 0:00:32.442 ************ 2025-05-04 00:57:00.614845 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.614859 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.614873 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.614887 | orchestrator | 2025-05-04 00:57:00.614901 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-04 00:57:00.614922 | orchestrator | Sunday 04 May 2025 00:54:03 +0000 (0:00:00.392) 0:00:32.835 ************ 2025-05-04 00:57:00.614937 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-04 00:57:00.614953 | orchestrator | ...ignoring 2025-05-04 00:57:00.614967 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-04 00:57:00.614981 | orchestrator | ...ignoring 2025-05-04 00:57:00.614995 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-04 00:57:00.615009 | orchestrator | ...ignoring 2025-05-04 00:57:00.615024 | orchestrator | 2025-05-04 00:57:00.615038 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-04 00:57:00.615052 | orchestrator | Sunday 04 May 2025 00:54:14 +0000 (0:00:10.844) 0:00:43.679 ************ 2025-05-04 00:57:00.615066 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.615080 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.615095 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.615109 | orchestrator | 2025-05-04 00:57:00.615123 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-04 00:57:00.615137 | orchestrator | Sunday 04 May 2025 00:54:15 +0000 (0:00:00.850) 0:00:44.530 ************ 2025-05-04 00:57:00.615151 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.615166 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615180 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615194 | orchestrator | 2025-05-04 00:57:00.615213 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-04 00:57:00.615228 | orchestrator | Sunday 04 May 2025 00:54:15 +0000 (0:00:00.844) 0:00:45.374 ************ 2025-05-04 00:57:00.615242 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.615257 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615271 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615285 | orchestrator | 2025-05-04 00:57:00.615306 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-04 00:57:00.615320 | orchestrator | Sunday 04 May 2025 00:54:16 +0000 (0:00:00.461) 0:00:45.836 ************ 2025-05-04 00:57:00.615335 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 00:57:00.615349 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615363 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615377 | orchestrator | 2025-05-04 00:57:00.615391 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-04 00:57:00.615406 | orchestrator | Sunday 04 May 2025 00:54:17 +0000 (0:00:00.690) 0:00:46.527 ************ 2025-05-04 00:57:00.615419 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.615434 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.615448 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.615462 | orchestrator | 2025-05-04 00:57:00.615476 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-04 00:57:00.615490 | orchestrator | Sunday 04 May 2025 00:54:17 +0000 (0:00:00.577) 0:00:47.105 ************ 2025-05-04 00:57:00.615505 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.615519 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615533 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615547 | orchestrator | 2025-05-04 00:57:00.615561 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-04 00:57:00.615575 | orchestrator | Sunday 04 May 2025 00:54:18 +0000 (0:00:00.624) 0:00:47.729 ************ 2025-05-04 00:57:00.615589 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615604 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615617 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-04 00:57:00.615638 | orchestrator | 2025-05-04 00:57:00.615652 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-04 00:57:00.615667 | orchestrator | Sunday 04 May 2025 00:54:18 +0000 (0:00:00.530) 0:00:48.259 ************ 2025-05-04 
00:57:00.615681 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.615695 | orchestrator | 2025-05-04 00:57:00.615709 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-04 00:57:00.615723 | orchestrator | Sunday 04 May 2025 00:54:29 +0000 (0:00:10.902) 0:00:59.162 ************ 2025-05-04 00:57:00.615738 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.615752 | orchestrator | 2025-05-04 00:57:00.615806 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-04 00:57:00.615823 | orchestrator | Sunday 04 May 2025 00:54:29 +0000 (0:00:00.207) 0:00:59.370 ************ 2025-05-04 00:57:00.615838 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.615852 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.615866 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.615879 | orchestrator | 2025-05-04 00:57:00.615894 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-04 00:57:00.615907 | orchestrator | Sunday 04 May 2025 00:54:31 +0000 (0:00:01.175) 0:01:00.545 ************ 2025-05-04 00:57:00.615921 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.615935 | orchestrator | 2025-05-04 00:57:00.615949 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-04 00:57:00.615963 | orchestrator | Sunday 04 May 2025 00:54:41 +0000 (0:00:09.977) 0:01:10.523 ************ 2025-05-04 00:57:00.615978 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
2025-05-04 00:57:00.615992 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.616006 | orchestrator | 2025-05-04 00:57:00.616020 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-04 00:57:00.616034 | orchestrator | Sunday 04 May 2025 00:54:48 +0000 (0:00:07.197) 0:01:17.720 ************ 2025-05-04 00:57:00.616049 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.616062 | orchestrator | 2025-05-04 00:57:00.616077 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-04 00:57:00.616091 | orchestrator | Sunday 04 May 2025 00:54:50 +0000 (0:00:02.655) 0:01:20.376 ************ 2025-05-04 00:57:00.616105 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.616119 | orchestrator | 2025-05-04 00:57:00.616133 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-04 00:57:00.616147 | orchestrator | Sunday 04 May 2025 00:54:51 +0000 (0:00:00.121) 0:01:20.498 ************ 2025-05-04 00:57:00.616168 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.616187 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.616202 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.616215 | orchestrator | 2025-05-04 00:57:00.616229 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-04 00:57:00.616243 | orchestrator | Sunday 04 May 2025 00:54:51 +0000 (0:00:00.475) 0:01:20.973 ************ 2025-05-04 00:57:00.616258 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.616272 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:57:00.616286 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:57:00.616313 | orchestrator | 2025-05-04 00:57:00.616328 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-04 00:57:00.616342 | orchestrator | Sunday 04 May 
2025 00:54:51 +0000 (0:00:00.477) 0:01:21.451 ************ 2025-05-04 00:57:00.616356 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-04 00:57:00.616369 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.616383 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:57:00.616398 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:57:00.616411 | orchestrator | 2025-05-04 00:57:00.616431 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-04 00:57:00.616445 | orchestrator | skipping: no hosts matched 2025-05-04 00:57:00.616465 | orchestrator | 2025-05-04 00:57:00.616480 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-04 00:57:00.616494 | orchestrator | 2025-05-04 00:57:00.616509 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-04 00:57:00.616523 | orchestrator | Sunday 04 May 2025 00:55:11 +0000 (0:00:19.810) 0:01:41.262 ************ 2025-05-04 00:57:00.616537 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:57:00.616551 | orchestrator | 2025-05-04 00:57:00.616571 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-04 00:57:00.616586 | orchestrator | Sunday 04 May 2025 00:55:30 +0000 (0:00:18.409) 0:01:59.671 ************ 2025-05-04 00:57:00.616600 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.616614 | orchestrator | 2025-05-04 00:57:00.616629 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-04 00:57:00.616642 | orchestrator | Sunday 04 May 2025 00:55:45 +0000 (0:00:15.553) 0:02:15.225 ************ 2025-05-04 00:57:00.616657 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.616671 | orchestrator | 2025-05-04 00:57:00.616684 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2025-05-04 00:57:00.616699 | orchestrator | 2025-05-04 00:57:00.616713 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-04 00:57:00.616727 | orchestrator | Sunday 04 May 2025 00:55:48 +0000 (0:00:02.871) 0:02:18.096 ************ 2025-05-04 00:57:00.616741 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:57:00.616755 | orchestrator | 2025-05-04 00:57:00.616789 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-04 00:57:00.616804 | orchestrator | Sunday 04 May 2025 00:56:09 +0000 (0:00:20.569) 0:02:38.666 ************ 2025-05-04 00:57:00.616818 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.616832 | orchestrator | 2025-05-04 00:57:00.616847 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-04 00:57:00.616861 | orchestrator | Sunday 04 May 2025 00:56:24 +0000 (0:00:15.564) 0:02:54.230 ************ 2025-05-04 00:57:00.616875 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.616889 | orchestrator | 2025-05-04 00:57:00.616903 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-04 00:57:00.616917 | orchestrator | 2025-05-04 00:57:00.616932 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-04 00:57:00.616946 | orchestrator | Sunday 04 May 2025 00:56:27 +0000 (0:00:02.996) 0:02:57.226 ************ 2025-05-04 00:57:00.616960 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.616975 | orchestrator | 2025-05-04 00:57:00.616989 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-04 00:57:00.617003 | orchestrator | Sunday 04 May 2025 00:56:38 +0000 (0:00:11.234) 0:03:08.460 ************ 2025-05-04 00:57:00.617017 | orchestrator | ok: [testbed-node-0] 2025-05-04 
00:57:00.617031 | orchestrator | 2025-05-04 00:57:00.617045 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-04 00:57:00.617060 | orchestrator | Sunday 04 May 2025 00:56:43 +0000 (0:00:04.512) 0:03:12.973 ************ 2025-05-04 00:57:00.617074 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.617088 | orchestrator | 2025-05-04 00:57:00.617102 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-04 00:57:00.617116 | orchestrator | 2025-05-04 00:57:00.617130 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-04 00:57:00.617144 | orchestrator | Sunday 04 May 2025 00:56:46 +0000 (0:00:03.014) 0:03:15.988 ************ 2025-05-04 00:57:00.617159 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:57:00.617172 | orchestrator | 2025-05-04 00:57:00.617186 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-04 00:57:00.617201 | orchestrator | Sunday 04 May 2025 00:56:47 +0000 (0:00:00.644) 0:03:16.633 ************ 2025-05-04 00:57:00.617215 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.617236 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.617251 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.617265 | orchestrator | 2025-05-04 00:57:00.617279 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-04 00:57:00.617293 | orchestrator | Sunday 04 May 2025 00:56:49 +0000 (0:00:02.613) 0:03:19.246 ************ 2025-05-04 00:57:00.617307 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.617321 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.617335 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.617349 | orchestrator | 2025-05-04 00:57:00.617364 | orchestrator | TASK 
[mariadb : Creating database backup user and setting permissions] ********* 2025-05-04 00:57:00.617378 | orchestrator | Sunday 04 May 2025 00:56:51 +0000 (0:00:02.031) 0:03:21.278 ************ 2025-05-04 00:57:00.617392 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.617406 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.617420 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.617434 | orchestrator | 2025-05-04 00:57:00.617453 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-04 00:57:00.617468 | orchestrator | Sunday 04 May 2025 00:56:54 +0000 (0:00:02.225) 0:03:23.503 ************ 2025-05-04 00:57:00.617482 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.617497 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.617510 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:57:00.617524 | orchestrator | 2025-05-04 00:57:00.617538 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-04 00:57:00.617552 | orchestrator | Sunday 04 May 2025 00:56:56 +0000 (0:00:02.147) 0:03:25.651 ************ 2025-05-04 00:57:00.617566 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:57:00.617580 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:57:00.617595 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:57:00.617608 | orchestrator | 2025-05-04 00:57:00.617622 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-04 00:57:00.617636 | orchestrator | Sunday 04 May 2025 00:56:59 +0000 (0:00:03.614) 0:03:29.265 ************ 2025-05-04 00:57:00.617651 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:57:00.617665 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:57:00.617679 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:57:00.617693 | orchestrator | 2025-05-04 00:57:00.617707 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-04 00:57:00.617721 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-04 00:57:00.617736 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-04 00:57:00.617758 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-04 00:57:03.678917 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-04 00:57:03.680092 | orchestrator | 2025-05-04 00:57:03.680151 | orchestrator | 2025-05-04 00:57:03.680179 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:57:03.680206 | orchestrator | Sunday 04 May 2025 00:57:00 +0000 (0:00:00.382) 0:03:29.648 ************ 2025-05-04 00:57:03.680222 | orchestrator | =============================================================================== 2025-05-04 00:57:03.680236 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.98s 2025-05-04 00:57:03.680250 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.12s 2025-05-04 00:57:03.680265 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 19.81s 2025-05-04 00:57:03.680279 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.23s 2025-05-04 00:57:03.680334 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.90s 2025-05-04 00:57:03.680360 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2025-05-04 00:57:03.680384 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.98s 2025-05-04 00:57:03.680402 | orchestrator | mariadb : Copying over 
galera.cnf --------------------------------------- 8.37s 2025-05-04 00:57:03.680416 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.20s 2025-05-04 00:57:03.680431 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.87s 2025-05-04 00:57:03.680563 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.90s 2025-05-04 00:57:03.680578 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.83s 2025-05-04 00:57:03.680600 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.51s 2025-05-04 00:57:03.680626 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.99s 2025-05-04 00:57:03.680652 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.61s 2025-05-04 00:57:03.680677 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.01s 2025-05-04 00:57:03.680702 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2025-05-04 00:57:03.680727 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.61s 2025-05-04 00:57:03.680749 | orchestrator | Check MariaDB service --------------------------------------------------- 2.56s 2025-05-04 00:57:03.680796 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.23s 2025-05-04 00:57:03.680819 | orchestrator | 2025-05-04 00:57:00 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:03.680835 | orchestrator | 2025-05-04 00:57:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:03.680850 | orchestrator | 2025-05-04 00:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:03.680905 | orchestrator | 2025-05-04 00:57:03 | INFO 
 | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:06.721511 | orchestrator | 2025-05-04 00:57:03 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:06.721657 | orchestrator | 2025-05-04 00:57:03 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:06.721678 | orchestrator | 2025-05-04 00:57:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:06.721720 | orchestrator | 2025-05-04 00:57:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:06.721755 | orchestrator | 2025-05-04 00:57:06 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:06.722622 | orchestrator | 2025-05-04 00:57:06 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:06.723006 | orchestrator | 2025-05-04 00:57:06 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:06.723040 | orchestrator | 2025-05-04 00:57:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:09.765743 | orchestrator | 2025-05-04 00:57:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:09.765929 | orchestrator | 2025-05-04 00:57:09 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:09.766376 | orchestrator | 2025-05-04 00:57:09 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:09.767189 | orchestrator | 2025-05-04 00:57:09 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:09.768268 | orchestrator | 2025-05-04 00:57:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:12.800945 | orchestrator | 2025-05-04 00:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:12.801128 | orchestrator | 2025-05-04 00:57:12 | INFO  | Task 
af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:12.802015 | orchestrator | 2025-05-04 00:57:12 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:12.803179 | orchestrator | 2025-05-04 00:57:12 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:12.804614 | orchestrator | 2025-05-04 00:57:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:12.805004 | orchestrator | 2025-05-04 00:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:15.847003 | orchestrator | 2025-05-04 00:57:15 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:15.850387 | orchestrator | 2025-05-04 00:57:15 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:15.852880 | orchestrator | 2025-05-04 00:57:15 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:15.853631 | orchestrator | 2025-05-04 00:57:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:18.889523 | orchestrator | 2025-05-04 00:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:18.889657 | orchestrator | 2025-05-04 00:57:18 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:18.891229 | orchestrator | 2025-05-04 00:57:18 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:21.929477 | orchestrator | 2025-05-04 00:57:18 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:21.929589 | orchestrator | 2025-05-04 00:57:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:21.929612 | orchestrator | 2025-05-04 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:21.929646 | orchestrator | 2025-05-04 00:57:21 | INFO  | Task 
af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:21.930093 | orchestrator | 2025-05-04 00:57:21 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:21.930832 | orchestrator | 2025-05-04 00:57:21 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:21.931640 | orchestrator | 2025-05-04 00:57:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:24.976396 | orchestrator | 2025-05-04 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:24.976537 | orchestrator | 2025-05-04 00:57:24 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:24.977276 | orchestrator | 2025-05-04 00:57:24 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:24.981255 | orchestrator | 2025-05-04 00:57:24 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:24.984794 | orchestrator | 2025-05-04 00:57:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:28.039174 | orchestrator | 2025-05-04 00:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:28.039307 | orchestrator | 2025-05-04 00:57:28 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:57:31.077579 | orchestrator | 2025-05-04 00:57:28 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED 2025-05-04 00:57:31.077747 | orchestrator | 2025-05-04 00:57:28 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:57:31.077831 | orchestrator | 2025-05-04 00:57:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:57:31.077858 | orchestrator | 2025-05-04 00:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:57:31.077893 | orchestrator | 2025-05-04 00:57:31 | INFO  | Task 
af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:57:31.078217 | orchestrator | 2025-05-04 00:57:31 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED
2025-05-04 00:57:31.079182 | orchestrator | 2025-05-04 00:57:31 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED
2025-05-04 00:57:31.080026 | orchestrator | 2025-05-04 00:57:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:57:34.129356 | orchestrator | 2025-05-04 00:57:31 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:58:35.269706 | orchestrator | 2025-05-04 00:58:35 | INFO  | Task 
af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:58:35.274193 | orchestrator | 2025-05-04 00:58:35 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state STARTED
2025-05-04 00:58:35.275299 | orchestrator | 2025-05-04 00:58:35 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED
2025-05-04 00:58:35.278267 | orchestrator | 2025-05-04 00:58:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:58:38.324074 | orchestrator | 2025-05-04 00:58:35 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:58:38.324253 | orchestrator | 2025-05-04 00:58:38 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:58:38.325707 | orchestrator | 2025-05-04 00:58:38 | INFO  | Task 643e9549-f345-42c2-ae89-1c1df0db0524 is in state SUCCESS
2025-05-04 00:58:38.328419 | orchestrator |
2025-05-04 00:58:38.328480 | orchestrator |
2025-05-04 00:58:38.328502 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:58:38.328522 | orchestrator |
2025-05-04 00:58:38.328542 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 00:58:38.328584 | orchestrator | Sunday 04 May 2025 00:57:03 +0000 (0:00:00.299) 0:00:00.299 ************
2025-05-04 00:58:38.328604 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.328625 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.328646 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.328667 | orchestrator |
2025-05-04 00:58:38.328688 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:58:38.328708 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:00.471) 0:00:00.771 ************
2025-05-04 00:58:38.328728 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-05-04 00:58:38.328748 | orchestrator | ok: [testbed-node-1] => 
(item=enable_horizon_True)
2025-05-04 00:58:38.328791 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-05-04 00:58:38.328811 | orchestrator |
2025-05-04 00:58:38.328832 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-05-04 00:58:38.328853 | orchestrator |
2025-05-04 00:58:38.328873 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-04 00:58:38.328894 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:00.363) 0:00:01.134 ************
2025-05-04 00:58:38.328914 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 00:58:38.328936 | orchestrator |
2025-05-04 00:58:38.328957 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-05-04 00:58:38.328977 | orchestrator | Sunday 04 May 2025 00:57:05 +0000 (0:00:00.836) 0:00:01.971 ************
2025-05-04 00:58:38.329005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-04 00:58:38.329074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-04 00:58:38.329100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-04 00:58:38.329140 | orchestrator |
2025-05-04 00:58:38.329161 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-05-04 00:58:38.329183 | orchestrator | Sunday 04 May 2025 00:57:07 +0000 (0:00:02.224) 0:00:04.195 ************
2025-05-04 00:58:38.329205 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.329227 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.329247 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.329269 | orchestrator |
2025-05-04 00:58:38.329291 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-04 00:58:38.329313 | orchestrator | Sunday 04 May 2025 00:57:07 +0000 (0:00:00.305) 0:00:04.500 ************
2025-05-04 00:58:38.329341 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-04 00:58:38.329364 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-05-04 00:58:38.329382 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-05-04 00:58:38.329398 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-05-04 00:58:38.329416 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-05-04 00:58:38.329432 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-05-04 00:58:38.329449 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-05-04 00:58:38.329466 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-04 00:58:38.329483 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-05-04 00:58:38.329500 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-05-04 00:58:38.329516 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-05-04 00:58:38.329532 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-05-04 00:58:38.329548 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-05-04 00:58:38.329565 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-05-04 00:58:38.329582 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-04 00:58:38.329599 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-05-04 00:58:38.329615 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-05-04 00:58:38.329632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-05-04 00:58:38.329649 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-05-04 00:58:38.329670 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-05-04 00:58:38.329688 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-05-04 00:58:38.329705 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-05-04 00:58:38.329728 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-05-04 00:58:38.329752 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-05-04 00:58:38.329803 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-05-04 00:58:38.329820 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True})
2025-05-04 00:58:38.329839 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-05-04 00:58:38.329857 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-05-04 00:58:38.329874 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-05-04 00:58:38.329891 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-05-04 00:58:38.329908 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-05-04 00:58:38.329925 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-05-04 00:58:38.329943 | orchestrator |
2025-05-04 00:58:38.329960 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.329978 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:01.378) 0:00:05.879 ************
2025-05-04 00:58:38.329995 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.330012 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.330082 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.330101 | orchestrator |
2025-05-04 00:58:38.330118 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.330135 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:00.511) 0:00:06.391 ************
2025-05-04 00:58:38.330153 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330201 | orchestrator |
2025-05-04 00:58:38.330226 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.330242 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:00.127) 0:00:06.518 ************
2025-05-04 00:58:38.330259 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330275 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.330291 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.330307 | orchestrator |
2025-05-04 00:58:38.330323 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.330340 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.460) 0:00:06.979 ************
2025-05-04 00:58:38.330356 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.330373 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.330389 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.330406 | orchestrator |
2025-05-04 00:58:38.330423 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.330439 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.310) 0:00:07.289 ************
2025-05-04 00:58:38.330456 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330473 | orchestrator |
2025-05-04 00:58:38.330489 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.330506 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.292) 0:00:07.582 ************
2025-05-04 00:58:38.330523 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330539 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.330568 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.330591 | orchestrator |
2025-05-04 00:58:38.330607 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.330624 | orchestrator | Sunday 04 May 2025 00:57:11 +0000 (0:00:00.369) 0:00:07.951 ************
2025-05-04 00:58:38.330640 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.330658 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.330674 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.330690 | orchestrator |
2025-05-04 00:58:38.330707 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.330723 | orchestrator | Sunday 04 May 2025 00:57:11 +0000 (0:00:00.472) 0:00:08.424 ************
2025-05-04 00:58:38.330739 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330756 | orchestrator |
2025-05-04 00:58:38.330820 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.330838 | orchestrator | Sunday 04 May 2025 00:57:11 +0000 (0:00:00.141) 0:00:08.566 ************
2025-05-04 00:58:38.330855 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.330872 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.330889 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.330906 | orchestrator |
2025-05-04 00:58:38.330922 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.330939 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.414) 0:00:08.980 ************
2025-05-04 00:58:38.330955 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.330972 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.330990 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.331007 | orchestrator |
2025-05-04 00:58:38.331022 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.331039 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.463) 0:00:09.444 ************
2025-05-04 00:58:38.331057 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331074 | orchestrator |
2025-05-04 00:58:38.331092 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.331108 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.133) 0:00:09.577 ************
2025-05-04 00:58:38.331126 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331143 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.331160 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.331177 | orchestrator |
2025-05-04 00:58:38.331195 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.331212 | orchestrator | Sunday 04 May 2025 00:57:13 +0000 (0:00:00.445) 0:00:10.023 ************
2025-05-04 00:58:38.331228 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.331246 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.331263 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.331280 | orchestrator |
2025-05-04 00:58:38.331297 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.331314 | orchestrator | Sunday 04 May 2025 00:57:13 +0000 (0:00:00.339) 0:00:10.362 ************
2025-05-04 00:58:38.331331 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331348 | orchestrator |
2025-05-04 00:58:38.331365 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.331383 | orchestrator | Sunday 04 May 2025 00:57:13 +0000 (0:00:00.246) 0:00:10.609 ************
2025-05-04 00:58:38.331394 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331405 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.331419 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.331436 | orchestrator |
2025-05-04 00:58:38.331463 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.331482 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:00.322) 0:00:10.932 ************
2025-05-04 00:58:38.331499 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.331517 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.331533 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.331562 | orchestrator |
2025-05-04 00:58:38.331579 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.331594 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:00.713) 0:00:11.645 ************
2025-05-04 00:58:38.331612 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331629 | orchestrator |
2025-05-04 00:58:38.331646 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.331663 | orchestrator | Sunday 04 May 2025 00:57:15 +0000 (0:00:00.123) 0:00:11.768 ************
2025-05-04 00:58:38.331680 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331697 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.331715 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.331732 | orchestrator |
2025-05-04 00:58:38.331749 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.331788 | orchestrator | Sunday 04 May 2025 00:57:15 +0000 (0:00:00.702) 0:00:12.471 ************
2025-05-04 00:58:38.331820 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.331839 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.331856 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.331873 | orchestrator |
2025-05-04 00:58:38.331889 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.331904 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:00.615) 0:00:13.087 ************
2025-05-04 00:58:38.331918 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331933 | orchestrator |
2025-05-04 00:58:38.331947 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.331961 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:00.117) 0:00:13.204 ************
2025-05-04 00:58:38.331976 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.331990 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.332004 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.332018 | orchestrator |
2025-05-04 00:58:38.332032 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-04 00:58:38.332046 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:00.418) 0:00:13.623 ************
2025-05-04 00:58:38.332060 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:58:38.332074 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:58:38.332088 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:58:38.332102 | orchestrator |
2025-05-04 00:58:38.332116 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-04 00:58:38.332130 | orchestrator | Sunday 04 May 2025 00:57:17 +0000 (0:00:00.563) 0:00:14.186 ************
2025-05-04 00:58:38.332144 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.332158 | orchestrator |
2025-05-04 00:58:38.332173 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-04 00:58:38.332187 | orchestrator | Sunday 04 May 2025 00:57:17 +0000 (0:00:00.146) 0:00:14.333 ************
2025-05-04 00:58:38.332202 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:58:38.332216 | orchestrator | skipping: [testbed-node-1]
2025-05-04 00:58:38.332230 | orchestrator | skipping: [testbed-node-2]
2025-05-04 00:58:38.332379 | orchestrator |
2025-05-04 00:58:38.332398 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-05-04 00:58:38.332412 | orchestrator | Sunday 04 May 2025 00:57:17 +0000 (0:00:00.350) 0:00:14.683 ************ 2025-05-04 00:58:38.332426 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:58:38.332441 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:58:38.332455 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:58:38.332470 | orchestrator | 2025-05-04 00:58:38.332483 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-04 00:58:38.332498 | orchestrator | Sunday 04 May 2025 00:57:18 +0000 (0:00:00.392) 0:00:15.076 ************ 2025-05-04 00:58:38.332512 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.332526 | orchestrator | 2025-05-04 00:58:38.332541 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-04 00:58:38.332555 | orchestrator | Sunday 04 May 2025 00:57:18 +0000 (0:00:00.095) 0:00:15.171 ************ 2025-05-04 00:58:38.332624 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.332641 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.332657 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.332672 | orchestrator | 2025-05-04 00:58:38.332682 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-04 00:58:38.332691 | orchestrator | Sunday 04 May 2025 00:57:18 +0000 (0:00:00.336) 0:00:15.508 ************ 2025-05-04 00:58:38.332700 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:58:38.332709 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:58:38.332718 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:58:38.332735 | orchestrator | 2025-05-04 00:58:38.332745 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-04 00:58:38.332753 | orchestrator | Sunday 04 May 2025 00:57:19 +0000 (0:00:00.404) 0:00:15.913 ************ 
2025-05-04 00:58:38.332777 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.332787 | orchestrator | 2025-05-04 00:58:38.332796 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-04 00:58:38.332804 | orchestrator | Sunday 04 May 2025 00:57:19 +0000 (0:00:00.126) 0:00:16.039 ************ 2025-05-04 00:58:38.332813 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.332822 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.332833 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.332849 | orchestrator | 2025-05-04 00:58:38.332866 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-04 00:58:38.332882 | orchestrator | Sunday 04 May 2025 00:57:19 +0000 (0:00:00.339) 0:00:16.379 ************ 2025-05-04 00:58:38.332899 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:58:38.332915 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:58:38.332931 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:58:38.332947 | orchestrator | 2025-05-04 00:58:38.332968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-04 00:58:38.332985 | orchestrator | Sunday 04 May 2025 00:57:19 +0000 (0:00:00.303) 0:00:16.682 ************ 2025-05-04 00:58:38.333002 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.333019 | orchestrator | 2025-05-04 00:58:38.333037 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-04 00:58:38.333053 | orchestrator | Sunday 04 May 2025 00:57:20 +0000 (0:00:00.098) 0:00:16.781 ************ 2025-05-04 00:58:38.333071 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.333088 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.333105 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.333121 | orchestrator | 2025-05-04 00:58:38.333138 | orchestrator | TASK 
[horizon : Copying over config.json files for services] ******************* 2025-05-04 00:58:38.333155 | orchestrator | Sunday 04 May 2025 00:57:20 +0000 (0:00:00.333) 0:00:17.114 ************ 2025-05-04 00:58:38.333172 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:58:38.333189 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:58:38.333207 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:58:38.333225 | orchestrator | 2025-05-04 00:58:38.333243 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-04 00:58:38.333260 | orchestrator | Sunday 04 May 2025 00:57:22 +0000 (0:00:02.500) 0:00:19.615 ************ 2025-05-04 00:58:38.333278 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-04 00:58:38.333310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-04 00:58:38.333326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-04 00:58:38.333342 | orchestrator | 2025-05-04 00:58:38.333359 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-04 00:58:38.333377 | orchestrator | Sunday 04 May 2025 00:57:25 +0000 (0:00:02.140) 0:00:21.755 ************ 2025-05-04 00:58:38.333392 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-04 00:58:38.333421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-04 00:58:38.333438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-04 00:58:38.333455 | orchestrator | 2025-05-04 00:58:38.333471 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-04 00:58:38.333487 | orchestrator | 
Sunday 04 May 2025 00:57:27 +0000 (0:00:02.329) 0:00:24.085 ************ 2025-05-04 00:58:38.333503 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-04 00:58:38.333519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-04 00:58:38.333536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-04 00:58:38.333551 | orchestrator | 2025-05-04 00:58:38.333567 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-04 00:58:38.333583 | orchestrator | Sunday 04 May 2025 00:57:29 +0000 (0:00:02.073) 0:00:26.159 ************ 2025-05-04 00:58:38.333598 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.333614 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.333629 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.333644 | orchestrator | 2025-05-04 00:58:38.333660 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-04 00:58:38.333676 | orchestrator | Sunday 04 May 2025 00:57:29 +0000 (0:00:00.266) 0:00:26.425 ************ 2025-05-04 00:58:38.333692 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.333707 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.333722 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.333736 | orchestrator | 2025-05-04 00:58:38.333746 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-04 00:58:38.333754 | orchestrator | Sunday 04 May 2025 00:57:30 +0000 (0:00:00.344) 0:00:26.769 ************ 2025-05-04 00:58:38.333816 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:58:38.333827 | orchestrator | 2025-05-04 00:58:38.333836 
| orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-04 00:58:38.333845 | orchestrator | Sunday 04 May 2025 00:57:30 +0000 (0:00:00.565) 0:00:27.335 ************ 2025-05-04 00:58:38.333867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.333893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.333910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.333928 | orchestrator | 2025-05-04 00:58:38.333937 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-04 00:58:38.333946 | orchestrator | Sunday 04 May 2025 00:57:32 +0000 (0:00:01.636) 0:00:28.972 ************ 2025-05-04 00:58:38.333955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.333964 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.333980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.333998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.334007 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.334040 | orchestrator | skipping: 
[testbed-node-1] 2025-05-04 00:58:38.334050 | orchestrator | 2025-05-04 00:58:38.334061 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-04 00:58:38.334070 | orchestrator | Sunday 04 May 2025 00:57:33 +0000 (0:00:01.005) 0:00:29.978 ************ 2025-05-04 00:58:38.334091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.334105 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.334114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.334132 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.334148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-04 00:58:38.334161 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.334169 | orchestrator | 2025-05-04 00:58:38.334178 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-04 00:58:38.334187 | orchestrator | Sunday 04 May 2025 00:57:34 +0000 (0:00:01.111) 0:00:31.089 ************ 2025-05-04 00:58:38.334201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.334222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.334238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-04 00:58:38.334259 | orchestrator | 2025-05-04 00:58:38.334275 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-04 00:58:38.334290 | orchestrator | Sunday 04 May 2025 00:57:38 +0000 (0:00:04.072) 0:00:35.162 ************ 2025-05-04 00:58:38.334304 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:58:38.334318 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:58:38.334332 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:58:38.334346 | orchestrator | 2025-05-04 00:58:38.334361 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-04 00:58:38.334375 | orchestrator | Sunday 04 May 2025 00:57:38 +0000 (0:00:00.392) 0:00:35.555 ************ 2025-05-04 00:58:38.334389 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:58:38.334405 | orchestrator | 2025-05-04 00:58:38.334420 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-04 00:58:38.334434 | orchestrator | Sunday 04 May 2025 00:57:39 +0000 (0:00:00.552) 0:00:36.107 ************ 2025-05-04 00:58:38.334448 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:58:38.334463 | orchestrator | 2025-05-04 00:58:38.334484 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-04 00:58:38.334499 | orchestrator | Sunday 04 May 2025 00:57:41 +0000 (0:00:02.383) 0:00:38.491 ************ 2025-05-04 00:58:38.334514 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:58:38.334530 | orchestrator | 2025-05-04 00:58:38.334544 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-04 00:58:38.334559 | orchestrator | Sunday 04 May 
2025 00:57:43 +0000 (0:00:02.157) 0:00:40.648 ************ 2025-05-04 00:58:38.334574 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:58:38.334588 | orchestrator | 2025-05-04 00:58:38.334602 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-04 00:58:38.334617 | orchestrator | Sunday 04 May 2025 00:57:57 +0000 (0:00:13.604) 0:00:54.252 ************ 2025-05-04 00:58:38.334633 | orchestrator | 2025-05-04 00:58:38.334647 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-04 00:58:38.334662 | orchestrator | Sunday 04 May 2025 00:57:57 +0000 (0:00:00.055) 0:00:54.307 ************ 2025-05-04 00:58:38.334677 | orchestrator | 2025-05-04 00:58:38.334692 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-04 00:58:38.334706 | orchestrator | Sunday 04 May 2025 00:57:57 +0000 (0:00:00.181) 0:00:54.489 ************ 2025-05-04 00:58:38.334721 | orchestrator | 2025-05-04 00:58:38.334735 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-04 00:58:38.334759 | orchestrator | Sunday 04 May 2025 00:57:57 +0000 (0:00:00.057) 0:00:54.546 ************ 2025-05-04 00:58:38.334799 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:58:38.334815 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:58:38.334829 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:58:38.334843 | orchestrator | 2025-05-04 00:58:38.334857 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:58:38.334872 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-04 00:58:38.334888 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-04 00:58:38.334897 | orchestrator | testbed-node-2 : ok=36  changed=8  
unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-04 00:58:38.334906 | orchestrator | 2025-05-04 00:58:38.334915 | orchestrator | 2025-05-04 00:58:38.334924 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:58:38.334932 | orchestrator | Sunday 04 May 2025 00:58:35 +0000 (0:00:38.068) 0:01:32.615 ************ 2025-05-04 00:58:38.334941 | orchestrator | =============================================================================== 2025-05-04 00:58:38.334950 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.07s 2025-05-04 00:58:38.334959 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.60s 2025-05-04 00:58:38.334967 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.07s 2025-05-04 00:58:38.334976 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.50s 2025-05-04 00:58:38.334985 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.38s 2025-05-04 00:58:38.334994 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.33s 2025-05-04 00:58:38.335002 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 2.22s 2025-05-04 00:58:38.335011 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.16s 2025-05-04 00:58:38.335025 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.14s 2025-05-04 00:58:38.335040 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.07s 2025-05-04 00:58:38.335054 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.64s 2025-05-04 00:58:38.335069 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.38s 
2025-05-04 00:58:38.335084 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.11s 2025-05-04 00:58:38.335108 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.01s 2025-05-04 00:58:41.372005 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2025-05-04 00:58:41.372170 | orchestrator | horizon : Update policy file name --------------------------------------- 0.71s 2025-05-04 00:58:41.372192 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.70s 2025-05-04 00:58:41.372207 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2025-05-04 00:58:41.372222 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-05-04 00:58:41.372236 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2025-05-04 00:58:41.372251 | orchestrator | 2025-05-04 00:58:38 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:58:41.372266 | orchestrator | 2025-05-04 00:58:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:58:41.372281 | orchestrator | 2025-05-04 00:58:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:58:41.372339 | orchestrator | 2025-05-04 00:58:41 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:58:41.374005 | orchestrator | 2025-05-04 00:58:41 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:58:41.376289 | orchestrator | 2025-05-04 00:58:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:58:44.424222 | orchestrator | 2025-05-04 00:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:58:44.424328 | orchestrator | 2025-05-04 00:58:44 | INFO  | Task 
af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:58:44.425125 | orchestrator | 2025-05-04 00:58:44 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:58:44.426865 | orchestrator | 2025-05-04 00:58:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:58:47.486319 | orchestrator | 2025-05-04 00:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:58:47.486480 | orchestrator | 2025-05-04 00:58:47 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:58:47.486842 | orchestrator | 2025-05-04 00:58:47 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state STARTED 2025-05-04 00:58:47.489351 | orchestrator | 2025-05-04 00:58:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:58:50.558295 | orchestrator | 2025-05-04 00:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:58:50.558446 | orchestrator | 2025-05-04 00:58:50 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED 2025-05-04 00:58:50.558794 | orchestrator | 2025-05-04 00:58:50 | INFO  | Task 61575bf6-4e98-4221-bc83-f7df5029dc2d is in state SUCCESS 2025-05-04 00:58:50.560818 | orchestrator | 2025-05-04 00:58:50.560949 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-04 00:58:50.560971 | orchestrator | 2025-05-04 00:58:50.560986 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-04 00:58:50.561001 | orchestrator | 2025-05-04 00:58:50.561016 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-04 00:58:50.561031 | orchestrator | Sunday 04 May 2025 00:56:42 +0000 (0:00:01.093) 0:00:01.093 ************ 2025-05-04 00:58:50.561047 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2025-05-04 00:58:50.561063 | orchestrator | 2025-05-04 00:58:50.561077 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-04 00:58:50.561092 | orchestrator | Sunday 04 May 2025 00:56:43 +0000 (0:00:00.516) 0:00:01.610 ************ 2025-05-04 00:58:50.561107 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-04 00:58:50.561121 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-04 00:58:50.561372 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-04 00:58:50.561393 | orchestrator | 2025-05-04 00:58:50.561408 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-04 00:58:50.561423 | orchestrator | Sunday 04 May 2025 00:56:44 +0000 (0:00:00.874) 0:00:02.484 ************ 2025-05-04 00:58:50.561437 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:58:50.561452 | orchestrator | 2025-05-04 00:58:50.561466 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-04 00:58:50.561481 | orchestrator | Sunday 04 May 2025 00:56:44 +0000 (0:00:00.723) 0:00:03.207 ************ 2025-05-04 00:58:50.561495 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.561511 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.561525 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.561540 | orchestrator | 2025-05-04 00:58:50.561588 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-04 00:58:50.561603 | orchestrator | Sunday 04 May 2025 00:56:45 +0000 (0:00:00.643) 0:00:03.851 ************ 2025-05-04 00:58:50.561617 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.561632 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.561646 | orchestrator | ok: [testbed-node-5] 
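The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines earlier in this log come from a simple poll-until-done loop over the orchestrator's task queue. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` callable standing in for whatever API the orchestrator actually exposes:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll until no task is in state STARTED; return the final states.

    get_task_state is a hypothetical stand-in for the real task-status
    lookup; interval mirrors the 1-second wait seen in the log.
    """
    states = {tid: get_task_state(tid) for tid in task_ids}
    while any(s == "STARTED" for s in states.values()):
        # Matches the log's "Wait 1 second(s) until the next check"
        time.sleep(interval)
        states = {tid: get_task_state(tid) for tid in task_ids}
    return states
```

This is only an illustration of the observed polling behavior, not the actual OSISM implementation.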
2025-05-04 00:58:50.561660 | orchestrator | 2025-05-04 00:58:50.561675 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-04 00:58:50.561690 | orchestrator | Sunday 04 May 2025 00:56:45 +0000 (0:00:00.318) 0:00:04.169 ************ 2025-05-04 00:58:50.561790 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.561807 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.561822 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.561836 | orchestrator | 2025-05-04 00:58:50.561851 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-04 00:58:50.561865 | orchestrator | Sunday 04 May 2025 00:56:46 +0000 (0:00:00.829) 0:00:04.998 ************ 2025-05-04 00:58:50.561879 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.561894 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.561908 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.561922 | orchestrator | 2025-05-04 00:58:50.561936 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-04 00:58:50.561951 | orchestrator | Sunday 04 May 2025 00:56:47 +0000 (0:00:00.314) 0:00:05.313 ************ 2025-05-04 00:58:50.561965 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.561979 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.561995 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.562077 | orchestrator | 2025-05-04 00:58:50.562094 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-04 00:58:50.562111 | orchestrator | Sunday 04 May 2025 00:56:47 +0000 (0:00:00.332) 0:00:05.645 ************ 2025-05-04 00:58:50.562127 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.562143 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.562158 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.562174 | orchestrator | 2025-05-04 00:58:50.562190 | 
orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-04 00:58:50.562206 | orchestrator | Sunday 04 May 2025 00:56:47 +0000 (0:00:00.345) 0:00:05.990 ************ 2025-05-04 00:58:50.562222 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.562238 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.562254 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.562269 | orchestrator | 2025-05-04 00:58:50.562285 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-04 00:58:50.562300 | orchestrator | Sunday 04 May 2025 00:56:48 +0000 (0:00:00.536) 0:00:06.527 ************ 2025-05-04 00:58:50.562317 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.562333 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.562349 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.562363 | orchestrator | 2025-05-04 00:58:50.562377 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-04 00:58:50.562391 | orchestrator | Sunday 04 May 2025 00:56:48 +0000 (0:00:00.338) 0:00:06.866 ************ 2025-05-04 00:58:50.562405 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-04 00:58:50.562424 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:58:50.562439 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:58:50.562453 | orchestrator | 2025-05-04 00:58:50.562467 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-04 00:58:50.562481 | orchestrator | Sunday 04 May 2025 00:56:49 +0000 (0:00:00.699) 0:00:07.566 ************ 2025-05-04 00:58:50.562495 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.562509 | orchestrator | ok: [testbed-node-4] 2025-05-04 
00:58:50.562523 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.562537 | orchestrator | 2025-05-04 00:58:50.562562 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-04 00:58:50.562577 | orchestrator | Sunday 04 May 2025 00:56:49 +0000 (0:00:00.465) 0:00:08.031 ************ 2025-05-04 00:58:50.562602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-04 00:58:50.562617 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:58:50.562631 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:58:50.562645 | orchestrator | 2025-05-04 00:58:50.562659 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-04 00:58:50.562674 | orchestrator | Sunday 04 May 2025 00:56:52 +0000 (0:00:02.380) 0:00:10.412 ************ 2025-05-04 00:58:50.562688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:58:50.562745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:58:50.562782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:58:50.562800 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.562815 | orchestrator | 2025-05-04 00:58:50.562829 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-04 00:58:50.562843 | orchestrator | Sunday 04 May 2025 00:56:52 +0000 (0:00:00.464) 0:00:10.876 ************ 2025-05-04 00:58:50.562859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.562876 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.562891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.562906 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.562920 | orchestrator | 2025-05-04 00:58:50.562934 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-04 00:58:50.562949 | orchestrator | Sunday 04 May 2025 00:56:53 +0000 (0:00:00.660) 0:00:11.536 ************ 2025-05-04 00:58:50.562964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.562980 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.562995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:58:50.563009 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563023 | orchestrator | 2025-05-04 00:58:50.563038 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-04 00:58:50.563060 | orchestrator | Sunday 04 May 2025 00:56:53 +0000 (0:00:00.166) 0:00:11.702 ************ 2025-05-04 00:58:50.563078 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'b8af287fe1aa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-04 00:56:50.617516', 'end': '2025-05-04 00:56:50.666533', 'delta': '0:00:00.049017', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8af287fe1aa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-04 00:58:50.563110 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '975183f2321c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-04 00:56:51.232929', 'end': '2025-05-04 00:56:51.274815', 'delta': '0:00:00.041886', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['975183f2321c'], 'stderr_lines': 
[], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-04 00:58:50.563127 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'ef264e3af733', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-04 00:56:51.768163', 'end': '2025-05-04 00:56:51.806226', 'delta': '0:00:00.038063', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ef264e3af733'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-04 00:58:50.563143 | orchestrator | 2025-05-04 00:58:50.563157 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-04 00:58:50.563171 | orchestrator | Sunday 04 May 2025 00:56:53 +0000 (0:00:00.216) 0:00:11.919 ************ 2025-05-04 00:58:50.563186 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.563200 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.563214 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.563229 | orchestrator | 2025-05-04 00:58:50.563243 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-04 00:58:50.563257 | orchestrator | Sunday 04 May 2025 00:56:54 +0000 (0:00:00.503) 0:00:12.422 ************ 2025-05-04 00:58:50.563272 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-04 00:58:50.563285 | orchestrator | 2025-05-04 00:58:50.563300 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-04 00:58:50.563314 | orchestrator | Sunday 04 May 2025 00:56:55 +0000 (0:00:01.359) 
0:00:13.782 ************ 2025-05-04 00:58:50.563328 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563342 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563357 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.563371 | orchestrator | 2025-05-04 00:58:50.563385 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-04 00:58:50.563399 | orchestrator | Sunday 04 May 2025 00:56:56 +0000 (0:00:00.528) 0:00:14.311 ************ 2025-05-04 00:58:50.563413 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563427 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563441 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.563464 | orchestrator | 2025-05-04 00:58:50.563478 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-04 00:58:50.563493 | orchestrator | Sunday 04 May 2025 00:56:56 +0000 (0:00:00.462) 0:00:14.774 ************ 2025-05-04 00:58:50.563507 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563521 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563535 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.563549 | orchestrator | 2025-05-04 00:58:50.563563 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-04 00:58:50.563578 | orchestrator | Sunday 04 May 2025 00:56:56 +0000 (0:00:00.313) 0:00:15.087 ************ 2025-05-04 00:58:50.563592 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.563606 | orchestrator | 2025-05-04 00:58:50.563620 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-04 00:58:50.563634 | orchestrator | Sunday 04 May 2025 00:56:56 +0000 (0:00:00.135) 0:00:15.222 ************ 2025-05-04 00:58:50.563648 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563662 | orchestrator | 2025-05-04 
00:58:50.563676 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-04 00:58:50.563696 | orchestrator | Sunday 04 May 2025 00:56:57 +0000 (0:00:00.222) 0:00:15.445 ************ 2025-05-04 00:58:50.563711 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563725 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563739 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.563753 | orchestrator | 2025-05-04 00:58:50.563832 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-04 00:58:50.563857 | orchestrator | Sunday 04 May 2025 00:56:57 +0000 (0:00:00.503) 0:00:15.949 ************ 2025-05-04 00:58:50.563872 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563887 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563901 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.563915 | orchestrator | 2025-05-04 00:58:50.563929 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-04 00:58:50.563943 | orchestrator | Sunday 04 May 2025 00:56:58 +0000 (0:00:00.344) 0:00:16.294 ************ 2025-05-04 00:58:50.563957 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.563972 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.563986 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.564000 | orchestrator | 2025-05-04 00:58:50.564014 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-04 00:58:50.564028 | orchestrator | Sunday 04 May 2025 00:56:58 +0000 (0:00:00.400) 0:00:16.694 ************ 2025-05-04 00:58:50.564043 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.564057 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.564079 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.564094 | orchestrator | 2025-05-04 
00:58:50.564108 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-04 00:58:50.564122 | orchestrator | Sunday 04 May 2025 00:56:58 +0000 (0:00:00.330) 0:00:17.025 ************ 2025-05-04 00:58:50.564137 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.564151 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.564165 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.564179 | orchestrator | 2025-05-04 00:58:50.564194 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-04 00:58:50.564208 | orchestrator | Sunday 04 May 2025 00:56:59 +0000 (0:00:00.584) 0:00:17.609 ************ 2025-05-04 00:58:50.564222 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.564236 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.564250 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.564264 | orchestrator | 2025-05-04 00:58:50.564278 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-04 00:58:50.564292 | orchestrator | Sunday 04 May 2025 00:56:59 +0000 (0:00:00.340) 0:00:17.950 ************ 2025-05-04 00:58:50.564306 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.564329 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.564344 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.564358 | orchestrator | 2025-05-04 00:58:50.564373 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-04 00:58:50.564387 | orchestrator | Sunday 04 May 2025 00:57:00 +0000 (0:00:00.341) 0:00:18.291 ************ 2025-05-04 00:58:50.564402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c91b3cb6--7edb--5452--ada6--d38ce882942b-osd--block--c91b3cb6--7edb--5452--ada6--d38ce882942b', 
'dm-uuid-LVM-71pBT1pjpRJYqyJHxHhbblalssmM2V04sOpQmwNgI8Lt2BvclRbx5w9p6VWnirL1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdbd5a24--b46a--5ddb--91ef--7688b352f27d-osd--block--bdbd5a24--b46a--5ddb--91ef--7688b352f27d', 'dm-uuid-LVM-63jZAGyNOSPWdLsrlUqR77pKYvgdH1swvi4BtdgMuM7w9hwSDZSx73SyxoZdyEWt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564464 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03a186d7--e7a2--5e82--b5c3--d5631de29e6f-osd--block--03a186d7--e7a2--5e82--b5c3--d5631de29e6f', 'dm-uuid-LVM-H898Fax1Eiy3jXiJQBv4rJ9xtSDMmGSGpTY9UgsuLbXoAHWnC10rGrmezMIppAsI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e087d3a--1c7d--5e62--b576--6c121f884fde-osd--block--5e087d3a--1c7d--5e62--b576--6c121f884fde', 'dm-uuid-LVM-1zdNh4CjG3AdEpVaRpghSqVud1VNP1C4IptBo6f0ecsGam9mjnq3e1LApYI4h9nG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 
00:58:50.564616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_df83d2dc-f695-4d60-b23d-cf602fc737d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.564639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c91b3cb6--7edb--5452--ada6--d38ce882942b-osd--block--c91b3cb6--7edb--5452--ada6--d38ce882942b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BiBYTr-IYLc-SCFQ-Z6RX-MbnW-2cU4-9GxVRN', 'scsi-0QEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7', 'scsi-SQEMU_QEMU_HARDDISK_e986bc1a-3638-41fe-8757-5755b3d430d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.564685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bdbd5a24--b46a--5ddb--91ef--7688b352f27d-osd--block--bdbd5a24--b46a--5ddb--91ef--7688b352f27d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zV9mSs-EDEo-rgO4-kOfM-VSZk-7TEp-asrc0o', 'scsi-0QEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93', 'scsi-SQEMU_QEMU_HARDDISK_9737e10e-3051-48df-9cd6-5b074c161c93'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.564714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc', 'scsi-SQEMU_QEMU_HARDDISK_f0e304d0-da68-45fd-ab80-c7aa1a870cfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.564846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.564866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564881 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.564906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98453abf--c748--514f--aec7--544322a7c940-osd--block--98453abf--c748--514f--aec7--544322a7c940', 'dm-uuid-LVM-7XxLxh6qGXWFUIar0pkE6d5efe3ZgUEsKv0g1Pt2G6w5HENw6FKkde3bcDPpeSXa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.564965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f54bf35c--9381--504c--8591--afe4d3e61469-osd--block--f54bf35c--9381--504c--8591--afe4d3e61469', 'dm-uuid-LVM-u7IJocf2PGV2a4kCLt3D5MBNkxQ6mVWJXRJqebLygXn6LkGRpzxE39be6s4PnQoh'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f13b3c3-7c4b-43f0-82a2-018e2f1c6b47-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--03a186d7--e7a2--5e82--b5c3--d5631de29e6f-osd--block--03a186d7--e7a2--5e82--b5c3--d5631de29e6f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2PuX0W-zM0h-IVnU-IADS-EXvd-Uvmr-f02hhG', 'scsi-0QEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef', 'scsi-SQEMU_QEMU_HARDDISK_5892b7dc-a458-477e-893f-beef3eb00cef'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5e087d3a--1c7d--5e62--b576--6c121f884fde-osd--block--5e087d3a--1c7d--5e62--b576--6c121f884fde'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2JkK3E-6SRE-OITx-acql-vGfX-hhz1-GUpcK7', 'scsi-0QEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254', 'scsi-SQEMU_QEMU_HARDDISK_fce9c480-0ce5-4d2c-b3f0-14cdf3862254'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f', 'scsi-SQEMU_QEMU_HARDDISK_3434c0cd-230e-4587-95bc-9baf80b8630f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-05-04 00:58:50.565205 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.565220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:58:50.565304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5ce6454b-2fbf-482a-841d-170d05af2df9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98453abf--c748--514f--aec7--544322a7c940-osd--block--98453abf--c748--514f--aec7--544322a7c940'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qZ2Fis-kumD-iBYV-dEZI-JiTk-Stdf-rfQvuQ', 'scsi-0QEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d', 'scsi-SQEMU_QEMU_HARDDISK_41a828c4-aadc-4592-9baf-1de326a5c86d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f54bf35c--9381--504c--8591--afe4d3e61469-osd--block--f54bf35c--9381--504c--8591--afe4d3e61469'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EhrDgY-tX6U-FBj6-Aknv-1OX0-9TAa-8r8edJ', 'scsi-0QEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783', 'scsi-SQEMU_QEMU_HARDDISK_4238a5d3-6f9a-453b-8646-1f6e7fcf7783'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123', 'scsi-SQEMU_QEMU_HARDDISK_10380154-7d57-4db6-80c5-fea690e2f123'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:58:50.565417 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.565432 | orchestrator | 2025-05-04 00:58:50.565446 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-04 00:58:50.565460 | orchestrator | Sunday 04 May 2025 00:57:00 +0000 (0:00:00.625) 0:00:18.917 ************ 2025-05-04 00:58:50.565475 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-04 00:58:50.565489 | orchestrator | 2025-05-04 00:58:50.565503 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-04 00:58:50.565517 | orchestrator | Sunday 04 May 2025 00:57:02 +0000 (0:00:01.431) 0:00:20.349 ************ 2025-05-04 00:58:50.565531 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.565545 | orchestrator | 2025-05-04 00:58:50.565560 | orchestrator | TASK [ceph-facts 
: set_fact rgw_hostname] ************************************** 2025-05-04 00:58:50.565574 | orchestrator | Sunday 04 May 2025 00:57:02 +0000 (0:00:00.174) 0:00:20.523 ************ 2025-05-04 00:58:50.565588 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.565602 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.565617 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.565631 | orchestrator | 2025-05-04 00:58:50.565645 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-04 00:58:50.565659 | orchestrator | Sunday 04 May 2025 00:57:02 +0000 (0:00:00.379) 0:00:20.903 ************ 2025-05-04 00:58:50.565673 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.565687 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.565702 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.565716 | orchestrator | 2025-05-04 00:58:50.565730 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-04 00:58:50.565744 | orchestrator | Sunday 04 May 2025 00:57:03 +0000 (0:00:00.697) 0:00:21.601 ************ 2025-05-04 00:58:50.565758 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.565801 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.565816 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.565830 | orchestrator | 2025-05-04 00:58:50.565845 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:58:50.565859 | orchestrator | Sunday 04 May 2025 00:57:03 +0000 (0:00:00.299) 0:00:21.900 ************ 2025-05-04 00:58:50.565873 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.565887 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.565901 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.565915 | orchestrator | 2025-05-04 00:58:50.565929 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 
00:58:50.565951 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:01.001) 0:00:22.902 ************ 2025-05-04 00:58:50.565965 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.565979 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.565993 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.566007 | orchestrator | 2025-05-04 00:58:50.566075 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:58:50.566093 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:00.327) 0:00:23.230 ************ 2025-05-04 00:58:50.566107 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.566122 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.566136 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.566150 | orchestrator | 2025-05-04 00:58:50.566164 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 00:58:50.566179 | orchestrator | Sunday 04 May 2025 00:57:05 +0000 (0:00:00.508) 0:00:23.738 ************ 2025-05-04 00:58:50.566193 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.566207 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.566221 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.566235 | orchestrator | 2025-05-04 00:58:50.566249 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-04 00:58:50.566263 | orchestrator | Sunday 04 May 2025 00:57:05 +0000 (0:00:00.392) 0:00:24.131 ************ 2025-05-04 00:58:50.566277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:58:50.566291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:58:50.566305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:58:50.566319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  
2025-05-04 00:58:50.566339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:58:50.566354 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.566369 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:58:50.566382 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:58:50.566397 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:58:50.566411 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.566425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:58:50.566439 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.566453 | orchestrator | 2025-05-04 00:58:50.566468 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-04 00:58:50.566490 | orchestrator | Sunday 04 May 2025 00:57:07 +0000 (0:00:01.268) 0:00:25.399 ************ 2025-05-04 00:58:50.566505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:58:50.566519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:58:50.566533 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:58:50.566547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:58:50.566561 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.566576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:58:50.566590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:58:50.566604 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:58:50.566618 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.566632 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:58:50.566646 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  
2025-05-04 00:58:50.566660 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.566674 | orchestrator | 2025-05-04 00:58:50.566688 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-04 00:58:50.566702 | orchestrator | Sunday 04 May 2025 00:57:07 +0000 (0:00:00.710) 0:00:26.109 ************ 2025-05-04 00:58:50.566724 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-04 00:58:50.566738 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-04 00:58:50.566753 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-04 00:58:50.566788 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-04 00:58:50.566804 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-04 00:58:50.567040 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-04 00:58:50.567059 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-04 00:58:50.567073 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-04 00:58:50.567087 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-04 00:58:50.567101 | orchestrator | 2025-05-04 00:58:50.567115 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-04 00:58:50.567129 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:01.975) 0:00:28.085 ************ 2025-05-04 00:58:50.567144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:58:50.567158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:58:50.567172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:58:50.567186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:58:50.567200 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:58:50.567214 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:58:50.567227 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.567241 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.567255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:58:50.567269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:58:50.567283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:58:50.567297 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.567311 | orchestrator | 2025-05-04 00:58:50.567325 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-04 00:58:50.567339 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.638) 0:00:28.724 ************ 2025-05-04 00:58:50.567353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-04 00:58:50.567367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-04 00:58:50.567381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-04 00:58:50.567395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-04 00:58:50.567408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-04 00:58:50.567422 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.567436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-04 00:58:50.567450 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.567464 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-04 00:58:50.567478 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-04 00:58:50.567491 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-04 00:58:50.567505 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.567519 | orchestrator | 2025-05-04 00:58:50.567533 | 
orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-04 00:58:50.567547 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.414) 0:00:29.139 ************ 2025-05-04 00:58:50.567561 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:58:50.567576 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:58:50.567590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:58:50.567605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:58:50.567627 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:58:50.567641 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:58:50.567656 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.567670 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.567684 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-04 00:58:50.567706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:58:50.567721 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:58:50.567735 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.567749 | orchestrator | 2025-05-04 00:58:50.567925 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-04 00:58:50.567968 | orchestrator | Sunday 04 May 2025 00:57:11 +0000 (0:00:00.475) 0:00:29.614 ************ 2025-05-04 00:58:50.567979 | orchestrator | included: 
/ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 00:58:50.567990 | orchestrator | 2025-05-04 00:58:50.568001 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-04 00:58:50.568012 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.750) 0:00:30.365 ************ 2025-05-04 00:58:50.568022 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568032 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568042 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568053 | orchestrator | 2025-05-04 00:58:50.568063 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-04 00:58:50.568073 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.339) 0:00:30.705 ************ 2025-05-04 00:58:50.568083 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568094 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568104 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568114 | orchestrator | 2025-05-04 00:58:50.568125 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-04 00:58:50.568135 | orchestrator | Sunday 04 May 2025 00:57:12 +0000 (0:00:00.333) 0:00:31.038 ************ 2025-05-04 00:58:50.568145 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568155 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568165 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568176 | orchestrator | 2025-05-04 00:58:50.568186 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-04 00:58:50.568196 | orchestrator | Sunday 04 May 2025 00:57:13 +0000 (0:00:00.325) 0:00:31.364 ************ 2025-05-04 00:58:50.568207 | orchestrator | ok: 
[testbed-node-3] 2025-05-04 00:58:50.568217 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.568227 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.568245 | orchestrator | 2025-05-04 00:58:50.568255 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-04 00:58:50.568265 | orchestrator | Sunday 04 May 2025 00:57:13 +0000 (0:00:00.649) 0:00:32.014 ************ 2025-05-04 00:58:50.568276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:58:50.568286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:58:50.568297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:58:50.568307 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568317 | orchestrator | 2025-05-04 00:58:50.568327 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-04 00:58:50.568338 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:00.402) 0:00:32.417 ************ 2025-05-04 00:58:50.568348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:58:50.568358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:58:50.568384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:58:50.568395 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568406 | orchestrator | 2025-05-04 00:58:50.568416 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-04 00:58:50.568427 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:00.426) 0:00:32.843 ************ 2025-05-04 00:58:50.568437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:58:50.568447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:58:50.568458 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-04 00:58:50.568468 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568478 | orchestrator | 2025-05-04 00:58:50.568488 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:58:50.568499 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:00.432) 0:00:33.276 ************ 2025-05-04 00:58:50.568509 | orchestrator | ok: [testbed-node-3] 2025-05-04 00:58:50.568519 | orchestrator | ok: [testbed-node-4] 2025-05-04 00:58:50.568529 | orchestrator | ok: [testbed-node-5] 2025-05-04 00:58:50.568540 | orchestrator | 2025-05-04 00:58:50.568550 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-04 00:58:50.568564 | orchestrator | Sunday 04 May 2025 00:57:15 +0000 (0:00:00.370) 0:00:33.646 ************ 2025-05-04 00:58:50.568595 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-04 00:58:50.568606 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-04 00:58:50.568616 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-04 00:58:50.568626 | orchestrator | 2025-05-04 00:58:50.568637 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-04 00:58:50.568647 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:01.049) 0:00:34.695 ************ 2025-05-04 00:58:50.568657 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568667 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568678 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568688 | orchestrator | 2025-05-04 00:58:50.568698 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-04 00:58:50.568708 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:00.576) 0:00:35.271 ************ 2025-05-04 00:58:50.568719 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568729 | orchestrator | 
skipping: [testbed-node-4] 2025-05-04 00:58:50.568739 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568749 | orchestrator | 2025-05-04 00:58:50.568759 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-04 00:58:50.568806 | orchestrator | Sunday 04 May 2025 00:57:17 +0000 (0:00:00.375) 0:00:35.647 ************ 2025-05-04 00:58:50.568818 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-04 00:58:50.568829 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568839 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-04 00:58:50.568849 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568859 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-04 00:58:50.568869 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568879 | orchestrator | 2025-05-04 00:58:50.568890 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-04 00:58:50.568900 | orchestrator | Sunday 04 May 2025 00:57:17 +0000 (0:00:00.487) 0:00:36.135 ************ 2025-05-04 00:58:50.568911 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-04 00:58:50.568921 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.568932 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-04 00:58:50.568943 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.568953 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-04 00:58:50.568969 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.568980 | orchestrator | 2025-05-04 00:58:50.568991 | orchestrator | TASK [ceph-facts : set_fact 
rgw_instances_all] ********************************* 2025-05-04 00:58:50.569007 | orchestrator | Sunday 04 May 2025 00:57:18 +0000 (0:00:00.292) 0:00:36.427 ************ 2025-05-04 00:58:50.569024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-04 00:58:50.569041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-04 00:58:50.569057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-04 00:58:50.569069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-04 00:58:50.569079 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.569090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-04 00:58:50.569100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-04 00:58:50.569110 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-04 00:58:50.569120 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.569130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-04 00:58:50.569141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-04 00:58:50.569151 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.569161 | orchestrator | 2025-05-04 00:58:50.569172 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-04 00:58:50.569183 | orchestrator | Sunday 04 May 2025 00:57:18 +0000 (0:00:00.760) 0:00:37.187 ************ 2025-05-04 00:58:50.569193 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.569203 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.569214 | orchestrator | skipping: [testbed-node-5] 2025-05-04 00:58:50.569224 | orchestrator | 2025-05-04 00:58:50.569234 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-04 00:58:50.569285 | orchestrator | Sunday 04 May 2025 
00:57:19 +0000 (0:00:00.287) 0:00:37.475 ************ 2025-05-04 00:58:50.569297 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-04 00:58:50.569307 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:58:50.569318 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:58:50.569328 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-04 00:58:50.569339 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-04 00:58:50.569349 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-04 00:58:50.569359 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-04 00:58:50.569370 | orchestrator | 2025-05-04 00:58:50.569380 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-04 00:58:50.569390 | orchestrator | Sunday 04 May 2025 00:57:20 +0000 (0:00:00.904) 0:00:38.380 ************ 2025-05-04 00:58:50.569400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-04 00:58:50.569410 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:58:50.569420 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:58:50.569431 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-04 00:58:50.569441 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-04 00:58:50.569451 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-04 00:58:50.569462 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2025-05-04 00:58:50.569472 | orchestrator | 2025-05-04 00:58:50.569482 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-04 00:58:50.569500 | orchestrator | Sunday 04 May 2025 00:57:21 +0000 (0:00:01.637) 0:00:40.017 ************ 2025-05-04 00:58:50.569510 | orchestrator | skipping: [testbed-node-3] 2025-05-04 00:58:50.569521 | orchestrator | skipping: [testbed-node-4] 2025-05-04 00:58:50.569531 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-04 00:58:50.569541 | orchestrator | 2025-05-04 00:58:50.569551 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-04 00:58:50.569572 | orchestrator | Sunday 04 May 2025 00:57:22 +0000 (0:00:00.521) 0:00:40.538 ************ 2025-05-04 00:58:50.569584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:58:50.569597 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:58:50.569607 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:58:50.569618 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': 
'', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:58:50.569628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-04 00:58:50.569638 | orchestrator | 2025-05-04 00:58:50.569649 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-04 00:58:50.569659 | orchestrator | Sunday 04 May 2025 00:58:02 +0000 (0:00:39.770) 0:01:20.308 ************ 2025-05-04 00:58:50.569669 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569690 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569700 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569710 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569720 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569730 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-04 00:58:50.569740 | orchestrator | 2025-05-04 00:58:50.569751 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-04 00:58:50.569783 | orchestrator | Sunday 04 May 2025 00:58:22 +0000 (0:00:20.510) 0:01:40.819 ************ 2025-05-04 00:58:50.569795 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 
00:58:50.569806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569816 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569826 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569836 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569854 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569864 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-04 00:58:50.569874 | orchestrator | 2025-05-04 00:58:50.569885 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-04 00:58:50.569895 | orchestrator | Sunday 04 May 2025 00:58:31 +0000 (0:00:09.208) 0:01:50.027 ************ 2025-05-04 00:58:50.569905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569915 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:50.569926 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-04 00:58:50.569936 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569946 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:50.569956 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-04 00:58:50.569967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.569977 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:50.569987 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => 
(item=None) 2025-05-04 00:58:50.569997 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:50.570007 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:50.570053 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-04 00:58:53.616460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:53.616625 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:53.616646 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-04 00:58:53.616662 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-04 00:58:53.616677 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-04 00:58:53.616691 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-04 00:58:53.616707 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-04 00:58:53.616722 | orchestrator | 2025-05-04 00:58:53.616737 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:58:53.616753 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-04 00:58:53.616816 | orchestrator | testbed-node-4 : ok=20  changed=0  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-04 00:58:53.616832 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-04 00:58:53.616846 | orchestrator | 2025-05-04 00:58:53.616861 | orchestrator | 2025-05-04 00:58:53.616876 | orchestrator | 2025-05-04 00:58:53.616890 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04
00:58:53.616904 | orchestrator | Sunday 04 May 2025 00:58:49 +0000 (0:00:18.125) 0:02:08.152 ************
2025-05-04 00:58:53.616919 | orchestrator | ===============================================================================
2025-05-04 00:58:53.616933 | orchestrator | create openstack pool(s) ----------------------------------------------- 39.77s
2025-05-04 00:58:53.616948 | orchestrator | generate keys ---------------------------------------------------------- 20.51s
2025-05-04 00:58:53.616962 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.13s
2025-05-04 00:58:53.617006 | orchestrator | get keys from monitors -------------------------------------------------- 9.21s
2025-05-04 00:58:53.617023 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.38s
2025-05-04 00:58:53.617039 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.98s
2025-05-04 00:58:53.617054 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.64s
2025-05-04 00:58:53.617070 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.43s
2025-05-04 00:58:53.617086 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.36s
2025-05-04 00:58:53.617102 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.27s
2025-05-04 00:58:53.617118 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.05s
2025-05-04 00:58:53.617134 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.00s
2025-05-04 00:58:53.617151 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.90s
2025-05-04 00:58:53.617167 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.87s
2025-05-04 00:58:53.617182 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.83s
2025-05-04 00:58:53.617196 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.76s
2025-05-04 00:58:53.617210 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.75s
2025-05-04 00:58:53.617224 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.72s
2025-05-04 00:58:53.617239 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.71s
2025-05-04 00:58:53.617253 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s
2025-05-04 00:58:53.617267 | orchestrator | 2025-05-04 00:58:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:58:53.617283 | orchestrator | 2025-05-04 00:58:50 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:58:53.617317 | orchestrator | 2025-05-04 00:58:53 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:58:53.618096 | orchestrator | 2025-05-04 00:58:53 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:58:53.621057 | orchestrator | 2025-05-04 00:58:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:58:53.621412 | orchestrator | 2025-05-04 00:58:53 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:58:56.668984 | orchestrator | 2025-05-04 00:58:56 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:58:56.672265 | orchestrator | 2025-05-04 00:58:56 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:58:56.675310 | orchestrator | 2025-05-04 00:58:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:58:59.725518 | orchestrator | 2025-05-04 00:58:56 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:58:59.725670 | orchestrator | 2025-05-04 00:58:59 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:58:59.726519 | orchestrator | 2025-05-04 00:58:59 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:58:59.727950 | orchestrator | 2025-05-04 00:58:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:02.791079 | orchestrator | 2025-05-04 00:58:59 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:02.791275 | orchestrator | 2025-05-04 00:59:02 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:02.794252 | orchestrator | 2025-05-04 00:59:02 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:02.797312 | orchestrator | 2025-05-04 00:59:02 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:02.799734 | orchestrator | 2025-05-04 00:59:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:05.844847 | orchestrator | 2025-05-04 00:59:02 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:05.845019 | orchestrator | 2025-05-04 00:59:05 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:05.847853 | orchestrator | 2025-05-04 00:59:05 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:05.850096 | orchestrator | 2025-05-04 00:59:05 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:05.851923 | orchestrator | 2025-05-04 00:59:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:08.902685 | orchestrator | 2025-05-04 00:59:05 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:08.902911 | orchestrator | 2025-05-04 00:59:08 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04
00:59:08.905637 | orchestrator | 2025-05-04 00:59:08 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:08.907699 | orchestrator | 2025-05-04 00:59:08 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:08.909804 | orchestrator | 2025-05-04 00:59:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:11.970668 | orchestrator | 2025-05-04 00:59:08 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:11.970885 | orchestrator | 2025-05-04 00:59:11 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:11.972933 | orchestrator | 2025-05-04 00:59:11 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:11.974616 | orchestrator | 2025-05-04 00:59:11 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:11.976091 | orchestrator | 2025-05-04 00:59:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:15.027057 | orchestrator | 2025-05-04 00:59:11 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:15.027202 | orchestrator | 2025-05-04 00:59:15 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:15.029050 | orchestrator | 2025-05-04 00:59:15 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:15.030859 | orchestrator | 2025-05-04 00:59:15 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:15.034196 | orchestrator | 2025-05-04 00:59:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:15.034237 | orchestrator | 2025-05-04 00:59:15 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:18.085287 | orchestrator | 2025-05-04 00:59:18 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:18.087607 | orchestrator | 2025-05-04 00:59:18 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:18.090164 | orchestrator | 2025-05-04 00:59:18 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:18.091884 | orchestrator | 2025-05-04 00:59:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:21.146307 | orchestrator | 2025-05-04 00:59:18 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:21.146482 | orchestrator | 2025-05-04 00:59:21 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:21.147821 | orchestrator | 2025-05-04 00:59:21 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:21.150132 | orchestrator | 2025-05-04 00:59:21 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:21.151553 | orchestrator | 2025-05-04 00:59:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:24.202323 | orchestrator | 2025-05-04 00:59:21 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:24.202521 | orchestrator | 2025-05-04 00:59:24 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:24.202551 | orchestrator | 2025-05-04 00:59:24 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:24.204407 | orchestrator | 2025-05-04 00:59:24 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:24.206307 | orchestrator | 2025-05-04 00:59:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:27.251745 | orchestrator | 2025-05-04 00:59:24 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:27.251939 | orchestrator | 2025-05-04 00:59:27 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:27.253189 | orchestrator | 2025-05-04 00:59:27 | INFO  |
Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:27.255745 | orchestrator | 2025-05-04 00:59:27 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state STARTED
2025-05-04 00:59:27.257686 | orchestrator | 2025-05-04 00:59:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:30.305051 | orchestrator | 2025-05-04 00:59:27 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:30.305200 | orchestrator | 2025-05-04 00:59:30 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:30.307591 | orchestrator | 2025-05-04 00:59:30 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:30.307654 | orchestrator |
2025-05-04 00:59:30.307672 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-04 00:59:30.307687 | orchestrator |
2025-05-04 00:59:30.307702 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-04 00:59:30.307716 | orchestrator |
2025-05-04 00:59:30.307731 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-04 00:59:30.307836 | orchestrator | Sunday 04 May 2025 00:59:02 +0000 (0:00:00.444) 0:00:00.444 ************
2025-05-04 00:59:30.307858 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-05-04 00:59:30.307874 | orchestrator |
2025-05-04 00:59:30.307889 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-04 00:59:30.307904 | orchestrator | Sunday 04 May 2025 00:59:02 +0000 (0:00:00.210) 0:00:00.654 ************
2025-05-04 00:59:30.307919 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:59:30.307933 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-04 00:59:30.307948 | orchestrator
| changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-04 00:59:30.307962 | orchestrator |
2025-05-04 00:59:30.307977 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-04 00:59:30.308029 | orchestrator | Sunday 04 May 2025 00:59:03 +0000 (0:00:00.860) 0:00:01.515 ************
2025-05-04 00:59:30.308046 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2025-05-04 00:59:30.308061 | orchestrator |
2025-05-04 00:59:30.308075 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-04 00:59:30.308118 | orchestrator | Sunday 04 May 2025 00:59:03 +0000 (0:00:00.228) 0:00:01.743 ************
2025-05-04 00:59:30.308133 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308148 | orchestrator |
2025-05-04 00:59:30.308165 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-04 00:59:30.308181 | orchestrator | Sunday 04 May 2025 00:59:04 +0000 (0:00:00.639) 0:00:02.382 ************
2025-05-04 00:59:30.308198 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308214 | orchestrator |
2025-05-04 00:59:30.308230 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-04 00:59:30.308246 | orchestrator | Sunday 04 May 2025 00:59:04 +0000 (0:00:00.468) 0:00:02.521 ************
2025-05-04 00:59:30.308263 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308278 | orchestrator |
2025-05-04 00:59:30.308292 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-04 00:59:30.308307 | orchestrator | Sunday 04 May 2025 00:59:04 +0000 (0:00:00.139) 0:00:02.990 ************
2025-05-04 00:59:30.308321 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308335 | orchestrator |
2025-05-04 00:59:30.308349 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-04 00:59:30.308363 | orchestrator | Sunday 04 May 2025 00:59:05 +0000 (0:00:00.136) 0:00:03.129 ************
2025-05-04 00:59:30.308378 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308392 | orchestrator |
2025-05-04 00:59:30.308406 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-04 00:59:30.308420 | orchestrator | Sunday 04 May 2025 00:59:05 +0000 (0:00:00.171) 0:00:03.265 ************
2025-05-04 00:59:30.308434 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308448 | orchestrator |
2025-05-04 00:59:30.308462 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-04 00:59:30.308477 | orchestrator | Sunday 04 May 2025 00:59:05 +0000 (0:00:00.138) 0:00:03.437 ************
2025-05-04 00:59:30.308491 | orchestrator | skipping: [testbed-node-0]
2025-05-04 00:59:30.308506 | orchestrator |
2025-05-04 00:59:30.308521 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-04 00:59:30.308535 | orchestrator | Sunday 04 May 2025 00:59:05 +0000 (0:00:00.329) 0:00:03.576 ************
2025-05-04 00:59:30.308549 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:30.308564 | orchestrator |
2025-05-04 00:59:30.308578 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-04 00:59:30.308592 | orchestrator | Sunday 04 May 2025 00:59:05 +0000 (0:00:00.726) 0:00:03.905 ************
2025-05-04 00:59:30.308606 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-04 00:59:30.308621 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-04 00:59:30.308635 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-04 00:59:30.308649 | orchestrator |
2025-05-04 00:59:30.308663 | orchestrator |
TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-04 00:59:30.308677 | orchestrator | Sunday 04 May 2025 00:59:06 +0000 (0:00:00.726) 0:00:04.632 ************ 2025-05-04 00:59:30.308692 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.308706 | orchestrator | 2025-05-04 00:59:30.308720 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-04 00:59:30.308734 | orchestrator | Sunday 04 May 2025 00:59:06 +0000 (0:00:00.250) 0:00:04.882 ************ 2025-05-04 00:59:30.308748 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:59:30.308762 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:59:30.308810 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:59:30.308825 | orchestrator | 2025-05-04 00:59:30.308839 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-04 00:59:30.308854 | orchestrator | Sunday 04 May 2025 00:59:08 +0000 (0:00:01.953) 0:00:06.836 ************ 2025-05-04 00:59:30.308876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:59:30.308890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:59:30.308904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:59:30.308919 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.308933 | orchestrator | 2025-05-04 00:59:30.308947 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-04 00:59:30.308972 | orchestrator | Sunday 04 May 2025 00:59:09 +0000 (0:00:00.445) 0:00:07.282 ************ 2025-05-04 00:59:30.308993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309025 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309039 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309053 | orchestrator | 2025-05-04 00:59:30.309068 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-04 00:59:30.309082 | orchestrator | Sunday 04 May 2025 00:59:10 +0000 (0:00:00.805) 0:00:08.087 ************ 2025-05-04 00:59:30.309098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309128 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-04 00:59:30.309142 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309157 | orchestrator | 2025-05-04 00:59:30.309171 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-04 00:59:30.309185 | orchestrator | Sunday 04 May 2025 00:59:10 +0000 (0:00:00.193) 0:00:08.281 ************ 2025-05-04 00:59:30.309204 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'b8af287fe1aa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-04 00:59:07.468882', 'end': '2025-05-04 00:59:07.508858', 'delta': '0:00:00.039976', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8af287fe1aa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-04 00:59:30.309229 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '975183f2321c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-04 00:59:08.047742', 'end': '2025-05-04 00:59:08.089201', 'delta': '0:00:00.041459', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['975183f2321c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-04 00:59:30.309253 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'ef264e3af733', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-04 00:59:08.578413', 'end': '2025-05-04 00:59:08.617159', 'delta': '0:00:00.038746', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ef264e3af733'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-04 00:59:30.309268 | orchestrator | 2025-05-04 00:59:30.309283 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-04 00:59:30.309298 | orchestrator | Sunday 04 May 2025 00:59:10 +0000 (0:00:00.205) 0:00:08.487 ************ 2025-05-04 00:59:30.309312 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.309326 | orchestrator | 2025-05-04 00:59:30.309341 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-04 00:59:30.309355 | orchestrator | Sunday 04 May 2025 00:59:10 +0000 (0:00:00.447) 0:00:08.935 ************ 2025-05-04 00:59:30.309370 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-04 00:59:30.309384 | orchestrator | 2025-05-04 00:59:30.309398 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-04 00:59:30.309413 | orchestrator | Sunday 04 
May 2025 00:59:12 +0000 (0:00:01.366) 0:00:10.301 ************ 2025-05-04 00:59:30.309427 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309441 | orchestrator | 2025-05-04 00:59:30.309456 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-04 00:59:30.309470 | orchestrator | Sunday 04 May 2025 00:59:12 +0000 (0:00:00.144) 0:00:10.446 ************ 2025-05-04 00:59:30.309485 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309499 | orchestrator | 2025-05-04 00:59:30.309513 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-04 00:59:30.309527 | orchestrator | Sunday 04 May 2025 00:59:12 +0000 (0:00:00.236) 0:00:10.682 ************ 2025-05-04 00:59:30.309541 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309556 | orchestrator | 2025-05-04 00:59:30.309570 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-04 00:59:30.309585 | orchestrator | Sunday 04 May 2025 00:59:12 +0000 (0:00:00.121) 0:00:10.804 ************ 2025-05-04 00:59:30.309599 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.309613 | orchestrator | 2025-05-04 00:59:30.309628 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-04 00:59:30.309642 | orchestrator | Sunday 04 May 2025 00:59:12 +0000 (0:00:00.139) 0:00:10.943 ************ 2025-05-04 00:59:30.309657 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309671 | orchestrator | 2025-05-04 00:59:30.309685 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-04 00:59:30.309700 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.232) 0:00:11.176 ************ 2025-05-04 00:59:30.309720 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309735 | orchestrator | 2025-05-04 00:59:30.309749 | orchestrator 
| TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-04 00:59:30.309787 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.125) 0:00:11.301 ************ 2025-05-04 00:59:30.309803 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309818 | orchestrator | 2025-05-04 00:59:30.309832 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-04 00:59:30.309846 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.118) 0:00:11.420 ************ 2025-05-04 00:59:30.309861 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309875 | orchestrator | 2025-05-04 00:59:30.309889 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-04 00:59:30.309904 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.128) 0:00:11.548 ************ 2025-05-04 00:59:30.309918 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.309934 | orchestrator | 2025-05-04 00:59:30.309948 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-04 00:59:30.309968 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.135) 0:00:11.684 ************ 2025-05-04 00:59:30.309983 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310005 | orchestrator | 2025-05-04 00:59:30.310082 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-04 00:59:30.310101 | orchestrator | Sunday 04 May 2025 00:59:13 +0000 (0:00:00.352) 0:00:12.036 ************ 2025-05-04 00:59:30.310116 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310131 | orchestrator | 2025-05-04 00:59:30.310146 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-04 00:59:30.310160 | orchestrator | Sunday 04 May 2025 00:59:14 +0000 (0:00:00.155) 0:00:12.192 ************ 2025-05-04 
00:59:30.310174 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310189 | orchestrator | 2025-05-04 00:59:30.310203 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-04 00:59:30.310217 | orchestrator | Sunday 04 May 2025 00:59:14 +0000 (0:00:00.136) 0:00:12.329 ************ 2025-05-04 00:59:30.310232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-04 00:59:30.310361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-04 00:59:30.310386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part1', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part14', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part15', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part16', 'scsi-SQEMU_QEMU_HARDDISK_86e714b2-7a79-4481-bc1e-8874c98b655d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:59:30.310411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b', 'scsi-SQEMU_QEMU_HARDDISK_f4ddea5b-b8af-4ee0-9445-5b6c1bebc06b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:59:30.310435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44aea083-53c7-4db3-b476-f0e15c33499e', 'scsi-SQEMU_QEMU_HARDDISK_44aea083-53c7-4db3-b476-f0e15c33499e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:59:30.310451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a', 'scsi-SQEMU_QEMU_HARDDISK_9f40ab83-2cd9-4bf4-a5ce-fe50f63fc73a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:59:30.310466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-04-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-04 00:59:30.310481 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310496 | orchestrator | 2025-05-04 00:59:30.310510 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-04 00:59:30.310525 | orchestrator | Sunday 04 May 2025 00:59:14 +0000 (0:00:00.287) 0:00:12.616 ************ 2025-05-04 00:59:30.310539 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310553 | orchestrator | 2025-05-04 00:59:30.310568 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-04 00:59:30.310582 | orchestrator | Sunday 04 May 2025 00:59:14 +0000 (0:00:00.251) 0:00:12.868 ************ 2025-05-04 00:59:30.310596 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310611 | orchestrator | 2025-05-04 00:59:30.310626 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-04 00:59:30.310640 | orchestrator | Sunday 04 May 2025 00:59:14 +0000 (0:00:00.140) 0:00:13.008 ************ 2025-05-04 00:59:30.310654 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.310668 | orchestrator | 2025-05-04 00:59:30.310683 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-04 00:59:30.310697 | orchestrator | Sunday 04 May 2025 00:59:15 +0000 (0:00:00.121) 0:00:13.130 ************ 2025-05-04 00:59:30.310717 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.310732 | orchestrator | 2025-05-04 00:59:30.310747 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-04 00:59:30.310761 | orchestrator | Sunday 04 May 2025 00:59:15 +0000 (0:00:00.470) 0:00:13.600 ************ 2025-05-04 00:59:30.310804 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.310830 | orchestrator | 2025-05-04 00:59:30.310845 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:59:30.310868 | orchestrator | Sunday 04 May 2025 00:59:15 +0000 (0:00:00.121) 0:00:13.721 ************ 2025-05-04 00:59:30.310882 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.310897 | orchestrator | 2025-05-04 00:59:30.310911 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 00:59:30.310925 | orchestrator | Sunday 04 May 2025 00:59:16 +0000 (0:00:00.451) 0:00:14.173 ************ 2025-05-04 00:59:30.310939 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.310954 | orchestrator | 2025-05-04 00:59:30.310968 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-04 00:59:30.310982 | orchestrator | Sunday 04 May 2025 00:59:16 +0000 (0:00:00.365) 0:00:14.539 ************ 2025-05-04 00:59:30.310996 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 00:59:30.311011 | orchestrator | 2025-05-04 00:59:30.311025 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-04 00:59:30.311039 | orchestrator | Sunday 04 May 2025 00:59:16 +0000 (0:00:00.260) 0:00:14.799 ************ 2025-05-04 00:59:30.311054 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311068 | orchestrator | 2025-05-04 00:59:30.311082 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-04 00:59:30.311097 | orchestrator | Sunday 04 May 2025 00:59:16 +0000 (0:00:00.155) 0:00:14.954 ************ 2025-05-04 00:59:30.311111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:59:30.311125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:59:30.311140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:59:30.311154 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311168 | orchestrator | 2025-05-04 00:59:30.311183 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-04 00:59:30.311197 | orchestrator | Sunday 04 May 2025 00:59:17 +0000 (0:00:00.476) 0:00:15.431 ************ 2025-05-04 00:59:30.311211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:59:30.311226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:59:30.311240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:59:30.311254 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311269 | orchestrator | 2025-05-04 00:59:30.311284 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-04 00:59:30.311298 | orchestrator | Sunday 04 May 2025 00:59:17 +0000 (0:00:00.494) 0:00:15.925 ************ 2025-05-04 00:59:30.311312 | orchestrator 
| ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:59:30.311327 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-04 00:59:30.311341 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-04 00:59:30.311362 | orchestrator | 2025-05-04 00:59:30.311385 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-04 00:59:30.311424 | orchestrator | Sunday 04 May 2025 00:59:19 +0000 (0:00:01.216) 0:00:17.141 ************ 2025-05-04 00:59:30.311444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:59:30.311459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:59:30.311473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:59:30.311487 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311501 | orchestrator | 2025-05-04 00:59:30.311516 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-04 00:59:30.311530 | orchestrator | Sunday 04 May 2025 00:59:19 +0000 (0:00:00.205) 0:00:17.347 ************ 2025-05-04 00:59:30.311544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-04 00:59:30.311559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-04 00:59:30.311573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-04 00:59:30.311587 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311602 | orchestrator | 2025-05-04 00:59:30.311616 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-04 00:59:30.311638 | orchestrator | Sunday 04 May 2025 00:59:19 +0000 (0:00:00.215) 0:00:17.562 ************ 2025-05-04 00:59:30.311652 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-04 00:59:30.311667 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-04 00:59:30.311681 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-04 00:59:30.311696 | orchestrator | 2025-05-04 00:59:30.311710 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-04 00:59:30.311724 | orchestrator | Sunday 04 May 2025 00:59:19 +0000 (0:00:00.188) 0:00:17.750 ************ 2025-05-04 00:59:30.311739 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311753 | orchestrator | 2025-05-04 00:59:30.311793 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-04 00:59:30.311819 | orchestrator | Sunday 04 May 2025 00:59:19 +0000 (0:00:00.128) 0:00:17.879 ************ 2025-05-04 00:59:30.311835 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:30.311850 | orchestrator | 2025-05-04 00:59:30.311864 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-04 00:59:30.311878 | orchestrator | Sunday 04 May 2025 00:59:20 +0000 (0:00:00.343) 0:00:18.222 ************ 2025-05-04 00:59:30.311892 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:59:30.311915 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:59:30.311930 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:59:30.311944 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-04 00:59:30.311958 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-04 00:59:30.311973 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-04 00:59:30.311988 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-04 00:59:30.312002 | orchestrator | 2025-05-04 00:59:30.312016 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-04 00:59:30.312030 | orchestrator | Sunday 04 May 2025 00:59:20 +0000 (0:00:00.848) 0:00:19.071 ************ 2025-05-04 00:59:30.312045 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-04 00:59:30.312059 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-04 00:59:30.312074 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-04 00:59:30.312088 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-04 00:59:30.312102 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-04 00:59:30.312116 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-04 00:59:30.312130 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-04 00:59:30.312144 | orchestrator | 2025-05-04 00:59:30.312158 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-04 00:59:30.312172 | orchestrator | Sunday 04 May 2025 00:59:22 +0000 (0:00:01.556) 0:00:20.627 ************ 2025-05-04 00:59:30.312186 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:30.312201 | orchestrator | 2025-05-04 00:59:30.312215 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-04 00:59:30.312229 | orchestrator | Sunday 04 May 2025 00:59:23 +0000 (0:00:00.467) 0:00:21.095 ************ 2025-05-04 00:59:30.312243 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:59:30.312257 | orchestrator | 2025-05-04 00:59:30.312271 | orchestrator | TASK [ceph-fetch-keys : 
copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-04 00:59:30.312293 | orchestrator | Sunday 04 May 2025 00:59:23 +0000 (0:00:00.641) 0:00:21.736 ************ 2025-05-04 00:59:30.312307 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-04 00:59:30.312327 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-04 00:59:30.312342 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-04 00:59:30.312356 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-04 00:59:30.312370 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-04 00:59:30.312385 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-04 00:59:30.312399 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-04 00:59:30.312413 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-04 00:59:30.312427 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-04 00:59:30.312441 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-04 00:59:30.312456 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-04 00:59:30.312470 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-04 00:59:30.312484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-04 00:59:30.312498 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-04 00:59:30.312513 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-05-04 00:59:30.312527 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-05-04 00:59:30.312541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-05-04 00:59:30.312556 | orchestrator |
2025-05-04 00:59:30.312570 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 00:59:30.312585 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-04 00:59:30.312600 | orchestrator |
2025-05-04 00:59:30.312614 | orchestrator |
2025-05-04 00:59:30.312628 | orchestrator |
2025-05-04 00:59:30.312643 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 00:59:30.312656 | orchestrator | Sunday 04 May 2025 00:59:29 +0000 (0:00:05.440) 0:00:27.176 ************
2025-05-04 00:59:30.312671 | orchestrator | ===============================================================================
2025-05-04 00:59:30.312685 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.44s
2025-05-04 00:59:30.312700 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.95s
2025-05-04 00:59:30.312714 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.56s
2025-05-04 00:59:30.312734 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.37s
2025-05-04 00:59:33.366139 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.22s
2025-05-04 00:59:33.366272 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.86s
2025-05-04 00:59:33.366292 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.85s
2025-05-04 00:59:33.366307 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.81s
2025-05-04 00:59:33.366322 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s
2025-05-04 00:59:33.366336 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.64s
2025-05-04 00:59:33.366351 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.64s
2025-05-04 00:59:33.366395 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.49s
2025-05-04 00:59:33.366410 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.48s
2025-05-04 00:59:33.366442 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.47s
2025-05-04 00:59:33.366457 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.47s
2025-05-04 00:59:33.366471 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s
2025-05-04 00:59:33.366486 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.45s
2025-05-04 00:59:33.366500 | orchestrator | ceph-facts : set_fact _container_exec_cmd ------------------------------- 0.45s
2025-05-04 00:59:33.366515 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.45s
2025-05-04 00:59:33.366529 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.37s
2025-05-04 00:59:33.366544 | orchestrator | 2025-05-04 00:59:30 | INFO  | Task 39791044-3bc8-4550-927e-eb938744d28b is in state SUCCESS
2025-05-04 00:59:33.366560 | orchestrator | 2025-05-04 00:59:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:33.366574 | orchestrator | 2025-05-04 00:59:30 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:33.366606 | orchestrator | 2025-05-04 00:59:33 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state STARTED
2025-05-04 00:59:33.367239 | orchestrator | 2025-05-04 00:59:33 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state STARTED
2025-05-04 00:59:33.368929 | orchestrator | 2025-05-04 00:59:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 00:59:36.430091 | orchestrator | 2025-05-04 00:59:33 | INFO  | Wait 1 second(s) until the next check
2025-05-04 00:59:36.430292 | orchestrator |
2025-05-04 00:59:36.430338 | orchestrator |
2025-05-04 00:59:36.430357 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 00:59:36.430374 | orchestrator |
2025-05-04 00:59:36.430391 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 00:59:36.430407 | orchestrator | Sunday 04 May 2025 00:57:03 +0000 (0:00:00.321) 0:00:00.321 ************
2025-05-04 00:59:36.430423 | orchestrator | ok: [testbed-node-0]
2025-05-04 00:59:36.430440 | orchestrator | ok: [testbed-node-1]
2025-05-04 00:59:36.430456 | orchestrator | ok: [testbed-node-2]
2025-05-04 00:59:36.430472 | orchestrator |
2025-05-04 00:59:36.430488 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 00:59:36.430504 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:00.495) 0:00:00.816 ************
2025-05-04 00:59:36.430520 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-04 00:59:36.430536 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-04 00:59:36.430553 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-04 00:59:36.430569 | orchestrator |
2025-05-04 00:59:36.430585 | orchestrator | PLAY [Apply role keystone]
***************************************************** 2025-05-04 00:59:36.430600 | orchestrator | 2025-05-04 00:59:36.430616 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 00:59:36.430633 | orchestrator | Sunday 04 May 2025 00:57:04 +0000 (0:00:00.318) 0:00:01.135 ************ 2025-05-04 00:59:36.430649 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:59:36.430679 | orchestrator | 2025-05-04 00:59:36.430694 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-04 00:59:36.430708 | orchestrator | Sunday 04 May 2025 00:57:05 +0000 (0:00:00.777) 0:00:01.912 ************ 2025-05-04 00:59:36.430727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.430810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.430841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 
2025-05-04 00:59:36.430860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430915 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.430960 | orchestrator | 2025-05-04 
00:59:36.430975 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-04 00:59:36.430996 | orchestrator | Sunday 04 May 2025 00:57:08 +0000 (0:00:02.869) 0:00:04.782 ************ 2025-05-04 00:59:36.431011 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-04 00:59:36.431026 | orchestrator | 2025-05-04 00:59:36.431040 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-04 00:59:36.431054 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:00.833) 0:00:05.616 ************ 2025-05-04 00:59:36.431068 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:36.431082 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:59:36.431097 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:59:36.431111 | orchestrator | 2025-05-04 00:59:36.431125 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-04 00:59:36.431139 | orchestrator | Sunday 04 May 2025 00:57:09 +0000 (0:00:00.530) 0:00:06.147 ************ 2025-05-04 00:59:36.431156 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:59:36.431179 | orchestrator | 2025-05-04 00:59:36.431203 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 00:59:36.431238 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.445) 0:00:06.592 ************ 2025-05-04 00:59:36.431262 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:59:36.431286 | orchestrator | 2025-05-04 00:59:36.431307 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-04 00:59:36.431322 | orchestrator | Sunday 04 May 2025 00:57:10 +0000 (0:00:00.688) 0:00:07.281 ************ 2025-05-04 00:59:36.431338 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.431354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.431377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.431394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.431490 | orchestrator | 2025-05-04 00:59:36.431504 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-04 00:59:36.431519 | orchestrator | Sunday 04 May 2025 00:57:14 +0000 (0:00:03.351) 0:00:10.633 ************ 2025-05-04 00:59:36.431541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.431594 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.431608 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.431668 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.431683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.431728 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.431743 | orchestrator | 2025-05-04 00:59:36.431758 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-04 00:59:36.431791 | orchestrator | Sunday 04 May 2025 00:57:15 +0000 (0:00:00.981) 0:00:11.615 ************ 2025-05-04 00:59:36.431806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.431867 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.431882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.431927 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.431950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-04 00:59:36.431973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.431988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-04 00:59:36.432003 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.432017 | orchestrator | 2025-05-04 00:59:36.432031 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-04 00:59:36.432058 | orchestrator | Sunday 04 May 2025 00:57:16 +0000 (0:00:01.489) 0:00:13.104 ************ 2025-05-04 00:59:36.432073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432089 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432552 | orchestrator | 2025-05-04 00:59:36 | INFO  | Task ccf201ba-7c82-4db1-9ef0-bc6825c7cb5e is in state STARTED 2025-05-04 00:59:36.432584 | orchestrator | 2025-05-04 00:59:36 | INFO  | Task af3a8e90-7a1d-4203-93fe-b59dcad7a99e is in state SUCCESS 2025-05-04 00:59:36.432602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes':
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432710 | orchestrator | 2025-05-04 00:59:36.432725 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-04 00:59:36.432740 | orchestrator | Sunday 04 May 2025 00:57:20 +0000 (0:00:03.491) 0:00:16.596 ************ 2025-05-04 00:59:36.432754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.432876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.432927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.432943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.432958 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.432995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.433010 | orchestrator | 2025-05-04 
00:59:36.433022 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-04 00:59:36.433035 | orchestrator | Sunday 04 May 2025 00:57:25 +0000 (0:00:05.788) 0:00:22.385 ************ 2025-05-04 00:59:36.433048 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.433061 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:59:36.433074 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:59:36.433086 | orchestrator | 2025-05-04 00:59:36.433099 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-04 00:59:36.433111 | orchestrator | Sunday 04 May 2025 00:57:27 +0000 (0:00:01.830) 0:00:24.215 ************ 2025-05-04 00:59:36.433124 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.433136 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.433149 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.433161 | orchestrator | 2025-05-04 00:59:36.433181 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-04 00:59:36.433195 | orchestrator | Sunday 04 May 2025 00:57:28 +0000 (0:00:00.930) 0:00:25.146 ************ 2025-05-04 00:59:36.433209 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.433223 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.433237 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.433251 | orchestrator | 2025-05-04 00:59:36.433265 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-04 00:59:36.433278 | orchestrator | Sunday 04 May 2025 00:57:28 +0000 (0:00:00.334) 0:00:25.480 ************ 2025-05-04 00:59:36.433292 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.433307 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.433320 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.433334 | orchestrator | 2025-05-04 
00:59:36.433348 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-04 00:59:36.433362 | orchestrator | Sunday 04 May 2025 00:57:29 +0000 (0:00:00.359) 0:00:25.839 ************ 2025-05-04 00:59:36.433377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.433400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.433416 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.433431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.433452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.433466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-04 00:59:36.433479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.433498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.433512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.433524 | orchestrator | 2025-05-04 00:59:36.433537 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 00:59:36.433550 | orchestrator | Sunday 04 May 2025 00:57:31 +0000 (0:00:02.380) 0:00:28.219 ************ 2025-05-04 00:59:36.433562 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.433575 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.433588 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.433600 | orchestrator | 2025-05-04 00:59:36.433613 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ****************************** 2025-05-04 00:59:36.433625 | orchestrator | Sunday 04 May 2025 00:57:31 +0000 (0:00:00.319) 0:00:28.539 ************ 2025-05-04 00:59:36.433638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-04 00:59:36.433651 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-04 00:59:36.433669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-04 00:59:36.433682 | orchestrator | 2025-05-04 00:59:36.433695 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-04 00:59:36.433707 | orchestrator | Sunday 04 May 2025 00:57:34 +0000 (0:00:02.442) 0:00:30.981 ************ 2025-05-04 00:59:36.433720 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:59:36.433732 | orchestrator | 2025-05-04 00:59:36.433745 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-04 00:59:36.433757 | orchestrator | Sunday 04 May 2025 00:57:35 +0000 (0:00:00.640) 0:00:31.621 ************ 2025-05-04 00:59:36.433793 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.433824 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.433838 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.433850 | orchestrator | 2025-05-04 00:59:36.433863 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-04 00:59:36.433876 | orchestrator | Sunday 04 May 2025 00:57:36 +0000 (0:00:01.238) 0:00:32.860 ************ 2025-05-04 00:59:36.433888 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-04 00:59:36.433907 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-04 00:59:36.433920 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 00:59:36.433933 
| orchestrator | 2025-05-04 00:59:36.433945 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-04 00:59:36.433958 | orchestrator | Sunday 04 May 2025 00:57:37 +0000 (0:00:00.891) 0:00:33.752 ************ 2025-05-04 00:59:36.433971 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:36.433983 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:59:36.433996 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:59:36.434008 | orchestrator | 2025-05-04 00:59:36.434056 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-04 00:59:36.434070 | orchestrator | Sunday 04 May 2025 00:57:37 +0000 (0:00:00.282) 0:00:34.034 ************ 2025-05-04 00:59:36.434083 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-04 00:59:36.434096 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-04 00:59:36.434108 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-04 00:59:36.434121 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-04 00:59:36.434134 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-04 00:59:36.434146 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-04 00:59:36.434159 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-04 00:59:36.434172 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-04 00:59:36.434185 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-04 00:59:36.434197 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-04 00:59:36.434210 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-04 00:59:36.434223 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-04 00:59:36.434236 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-04 00:59:36.434248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-04 00:59:36.434266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-04 00:59:36.434280 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 00:59:36.434293 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 00:59:36.434310 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 00:59:36.434323 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 00:59:36.434335 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 00:59:36.434348 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 00:59:36.434361 | orchestrator | 2025-05-04 00:59:36.434373 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-04 00:59:36.434386 | orchestrator | Sunday 04 May 2025 00:57:47 +0000 (0:00:09.911) 0:00:43.945 ************ 2025-05-04 00:59:36.434399 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 00:59:36.434411 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 00:59:36.434431 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 00:59:36.434443 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 00:59:36.434456 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 00:59:36.434475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 00:59:36.434488 | orchestrator | 2025-05-04 00:59:36.434501 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-04 00:59:36.434513 | orchestrator | Sunday 04 May 2025 00:57:50 +0000 (0:00:03.330) 0:00:47.276 ************ 2025-05-04 00:59:36.434526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.434541 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.434554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-04 00:59:36.434568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-04 00:59:36.434673 | orchestrator | 2025-05-04 00:59:36.434695 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 00:59:36.434716 | orchestrator | Sunday 04 May 2025 00:57:53 +0000 (0:00:02.979) 0:00:50.255 ************ 2025-05-04 00:59:36.434735 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.434755 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.434847 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.434869 | orchestrator | 2025-05-04 00:59:36.434888 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-04 00:59:36.434909 | orchestrator | Sunday 04 May 2025 00:57:54 +0000 (0:00:00.345) 0:00:50.601 ************ 2025-05-04 00:59:36.434930 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.434951 | orchestrator | 2025-05-04 00:59:36.434968 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-04 00:59:36.434982 | orchestrator | Sunday 04 May 2025 00:57:56 +0000 (0:00:02.442) 0:00:53.043 ************ 2025-05-04 00:59:36.434993 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435003 | orchestrator | 2025-05-04 00:59:36.435013 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-04 00:59:36.435023 | orchestrator | Sunday 04 May 2025 00:57:58 +0000 (0:00:02.218) 0:00:55.262 ************ 2025-05-04 00:59:36.435033 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:59:36.435043 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:36.435054 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:59:36.435064 | orchestrator | 2025-05-04 00:59:36.435074 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-04 00:59:36.435085 | orchestrator | Sunday 04 May 2025 
00:58:00 +0000 (0:00:01.302) 0:00:56.564 ************ 2025-05-04 00:59:36.435095 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:36.435111 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:59:36.435122 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:59:36.435132 | orchestrator | 2025-05-04 00:59:36.435143 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-04 00:59:36.435153 | orchestrator | Sunday 04 May 2025 00:58:00 +0000 (0:00:00.366) 0:00:56.930 ************ 2025-05-04 00:59:36.435164 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.435174 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:36.435184 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:36.435195 | orchestrator | 2025-05-04 00:59:36.435205 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-04 00:59:36.435216 | orchestrator | Sunday 04 May 2025 00:58:00 +0000 (0:00:00.511) 0:00:57.442 ************ 2025-05-04 00:59:36.435226 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435236 | orchestrator | 2025-05-04 00:59:36.435247 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-04 00:59:36.435257 | orchestrator | Sunday 04 May 2025 00:58:13 +0000 (0:00:12.301) 0:01:09.743 ************ 2025-05-04 00:59:36.435267 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435278 | orchestrator | 2025-05-04 00:59:36.435288 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-04 00:59:36.435298 | orchestrator | Sunday 04 May 2025 00:58:22 +0000 (0:00:08.844) 0:01:18.587 ************ 2025-05-04 00:59:36.435308 | orchestrator | 2025-05-04 00:59:36.435319 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-04 00:59:36.435329 | orchestrator | Sunday 04 May 2025 00:58:22 +0000 
(0:00:00.063) 0:01:18.650 ************ 2025-05-04 00:59:36.435339 | orchestrator | 2025-05-04 00:59:36.435349 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-04 00:59:36.435365 | orchestrator | Sunday 04 May 2025 00:58:22 +0000 (0:00:00.055) 0:01:18.706 ************ 2025-05-04 00:59:36.435375 | orchestrator | 2025-05-04 00:59:36.435386 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-04 00:59:36.435397 | orchestrator | Sunday 04 May 2025 00:58:22 +0000 (0:00:00.058) 0:01:18.765 ************ 2025-05-04 00:59:36.435407 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435417 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:59:36.435427 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:59:36.435437 | orchestrator | 2025-05-04 00:59:36.435447 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-04 00:59:36.435458 | orchestrator | Sunday 04 May 2025 00:58:36 +0000 (0:00:13.854) 0:01:32.620 ************ 2025-05-04 00:59:36.435475 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:59:36.435485 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:59:36.435495 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435505 | orchestrator | 2025-05-04 00:59:36.435516 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-04 00:59:36.435526 | orchestrator | Sunday 04 May 2025 00:58:43 +0000 (0:00:07.819) 0:01:40.440 ************ 2025-05-04 00:59:36.435536 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435546 | orchestrator | changed: [testbed-node-1] 2025-05-04 00:59:36.435557 | orchestrator | changed: [testbed-node-2] 2025-05-04 00:59:36.435567 | orchestrator | 2025-05-04 00:59:36.435577 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 
00:59:36.435588 | orchestrator | Sunday 04 May 2025 00:58:54 +0000 (0:00:10.713) 0:01:51.153 ************ 2025-05-04 00:59:36.435598 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 00:59:36.435608 | orchestrator | 2025-05-04 00:59:36.435618 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-04 00:59:36.435629 | orchestrator | Sunday 04 May 2025 00:58:55 +0000 (0:00:00.879) 0:01:52.033 ************ 2025-05-04 00:59:36.435639 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:36.435649 | orchestrator | ok: [testbed-node-1] 2025-05-04 00:59:36.435659 | orchestrator | ok: [testbed-node-2] 2025-05-04 00:59:36.435669 | orchestrator | 2025-05-04 00:59:36.435680 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-04 00:59:36.435690 | orchestrator | Sunday 04 May 2025 00:58:56 +0000 (0:00:01.081) 0:01:53.114 ************ 2025-05-04 00:59:36.435700 | orchestrator | changed: [testbed-node-0] 2025-05-04 00:59:36.435710 | orchestrator | 2025-05-04 00:59:36.435721 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-04 00:59:36.435731 | orchestrator | Sunday 04 May 2025 00:58:58 +0000 (0:00:01.509) 0:01:54.624 ************ 2025-05-04 00:59:36.435741 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-04 00:59:36.435751 | orchestrator | 2025-05-04 00:59:36.435761 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-04 00:59:36.435792 | orchestrator | Sunday 04 May 2025 00:59:06 +0000 (0:00:08.409) 0:02:03.034 ************ 2025-05-04 00:59:36.435803 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-04 00:59:36.435813 | orchestrator | 2025-05-04 00:59:36.435824 | orchestrator | TASK [service-ks-register : keystone | Creating 
endpoints] ********************* 2025-05-04 00:59:36.435834 | orchestrator | Sunday 04 May 2025 00:59:24 +0000 (0:00:17.996) 0:02:21.030 ************ 2025-05-04 00:59:36.435844 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-04 00:59:36.435855 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-04 00:59:36.435865 | orchestrator | 2025-05-04 00:59:36.435875 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-04 00:59:36.435893 | orchestrator | Sunday 04 May 2025 00:59:30 +0000 (0:00:06.238) 0:02:27.268 ************ 2025-05-04 00:59:36.435904 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.435914 | orchestrator | 2025-05-04 00:59:36.435924 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-04 00:59:36.435935 | orchestrator | Sunday 04 May 2025 00:59:30 +0000 (0:00:00.132) 0:02:27.400 ************ 2025-05-04 00:59:36.435945 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:36.435956 | orchestrator | 2025-05-04 00:59:36.435966 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-04 00:59:36.435981 | orchestrator | Sunday 04 May 2025 00:59:30 +0000 (0:00:00.127) 0:02:27.528 ************ 2025-05-04 00:59:39.513822 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:39.514093 | orchestrator | 2025-05-04 00:59:39.514124 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-04 00:59:39.514166 | orchestrator | Sunday 04 May 2025 00:59:31 +0000 (0:00:00.140) 0:02:27.668 ************ 2025-05-04 00:59:39.514181 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:39.514196 | orchestrator | 2025-05-04 00:59:39.514210 | orchestrator | TASK [keystone : Creating default user role] 
*********************************** 2025-05-04 00:59:39.514224 | orchestrator | Sunday 04 May 2025 00:59:31 +0000 (0:00:00.454) 0:02:28.123 ************ 2025-05-04 00:59:39.514239 | orchestrator | ok: [testbed-node-0] 2025-05-04 00:59:39.514254 | orchestrator | 2025-05-04 00:59:39.514268 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-04 00:59:39.514283 | orchestrator | Sunday 04 May 2025 00:59:34 +0000 (0:00:03.190) 0:02:31.314 ************ 2025-05-04 00:59:39.514297 | orchestrator | skipping: [testbed-node-0] 2025-05-04 00:59:39.514312 | orchestrator | skipping: [testbed-node-1] 2025-05-04 00:59:39.514326 | orchestrator | skipping: [testbed-node-2] 2025-05-04 00:59:39.514340 | orchestrator | 2025-05-04 00:59:39.514363 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 00:59:39.514379 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-04 00:59:39.514394 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-04 00:59:39.514409 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-04 00:59:39.514423 | orchestrator | 2025-05-04 00:59:39.514437 | orchestrator | 2025-05-04 00:59:39.514451 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 00:59:39.514466 | orchestrator | Sunday 04 May 2025 00:59:35 +0000 (0:00:00.527) 0:02:31.841 ************ 2025-05-04 00:59:39.514480 | orchestrator | =============================================================================== 2025-05-04 00:59:39.514494 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.00s 2025-05-04 00:59:39.514508 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.85s 
2025-05-04 00:59:39.514522 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.30s 2025-05-04 00:59:39.514536 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.71s 2025-05-04 00:59:39.514550 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.91s 2025-05-04 00:59:39.514565 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.84s 2025-05-04 00:59:39.514579 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.41s 2025-05-04 00:59:39.514593 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.82s 2025-05-04 00:59:39.514607 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.24s 2025-05-04 00:59:39.514621 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.79s 2025-05-04 00:59:39.514635 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.49s 2025-05-04 00:59:39.514649 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.35s 2025-05-04 00:59:39.514663 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.33s 2025-05-04 00:59:39.514677 | orchestrator | keystone : Creating default user role ----------------------------------- 3.19s 2025-05-04 00:59:39.514691 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.98s 2025-05-04 00:59:39.514705 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.87s 2025-05-04 00:59:39.514719 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.44s 2025-05-04 00:59:39.514734 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.44s 2025-05-04 
00:59:39.514748 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.38s 2025-05-04 00:59:39.514897 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.22s 2025-05-04 00:59:39.514918 | orchestrator | 2025-05-04 00:59:36 | INFO  | Task 8e135965-e0dc-403c-9c28-92aa524c3c6a is in state SUCCESS 2025-05-04 00:59:39.514933 | orchestrator | 2025-05-04 00:59:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:59:39.514948 | orchestrator | 2025-05-04 00:59:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:59:39.514978 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task ccf201ba-7c82-4db1-9ef0-bc6825c7cb5e is in state STARTED 2025-05-04 00:59:39.515567 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 00:59:39.515594 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 00:59:39.515615 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 00:59:39.516630 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 00:59:39.516847 | orchestrator | 2025-05-04 00:59:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 00:59:42.565685 | orchestrator | 2025-05-04 00:59:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 00:59:42.565887 | orchestrator | 2025-05-04 00:59:42 | INFO  | Task ccf201ba-7c82-4db1-9ef0-bc6825c7cb5e is in state STARTED 2025-05-04 00:59:42.572897 | orchestrator | 2025-05-04 00:59:42 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 00:59:42.574501 | orchestrator | 2025-05-04 00:59:42 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 00:59:42.577344 | orchestrator 
7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:00:28.406051 | orchestrator | 2025-05-04 01:00:28 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:00:28.406514 | orchestrator | 2025-05-04 01:00:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:00:31.450260 | orchestrator | 2025-05-04 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:00:31.450387 | orchestrator | 2025-05-04 01:00:31.450408 | orchestrator | 2025-05-04 01:00:31.450423 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-04 01:00:31.450438 | orchestrator | 2025-05-04 01:00:31.450453 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-04 01:00:31.450468 | orchestrator | Sunday 04 May 2025 00:58:53 +0000 (0:00:00.141) 0:00:00.141 ************ 2025-05-04 01:00:31.450482 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-04 01:00:31.450497 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450511 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450525 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-04 01:00:31.450540 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450570 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-04 01:00:31.450585 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-04 01:00:31.450599 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-04 01:00:31.450613 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-04 
01:00:31.450628 | orchestrator | 2025-05-04 01:00:31.450642 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-04 01:00:31.450656 | orchestrator | Sunday 04 May 2025 00:58:56 +0000 (0:00:03.135) 0:00:03.276 ************ 2025-05-04 01:00:31.450671 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-04 01:00:31.450685 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450699 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450713 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-04 01:00:31.450728 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-04 01:00:31.450742 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-04 01:00:31.450756 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-04 01:00:31.450771 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-04 01:00:31.450832 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-04 01:00:31.450852 | orchestrator | 2025-05-04 01:00:31.450869 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-04 01:00:31.450884 | orchestrator | Sunday 04 May 2025 00:58:56 +0000 (0:00:00.244) 0:00:03.520 ************ 2025-05-04 01:00:31.450900 | orchestrator | ok: [testbed-manager] => { 2025-05-04 01:00:31.450953 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-05-04 01:00:31.450973 | orchestrator | } 2025-05-04 01:00:31.450990 | orchestrator | 2025-05-04 01:00:31.451007 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-04 01:00:31.451023 | orchestrator | Sunday 04 May 2025 00:58:56 +0000 (0:00:00.164) 0:00:03.685 ************ 2025-05-04 01:00:31.451039 | orchestrator | changed: [testbed-manager] 2025-05-04 01:00:31.451062 | orchestrator | 2025-05-04 01:00:31.451096 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-04 01:00:31.451125 | orchestrator | Sunday 04 May 2025 00:59:29 +0000 (0:00:32.823) 0:00:36.509 ************ 2025-05-04 01:00:31.451149 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-04 01:00:31.451174 | orchestrator | 2025-05-04 01:00:31.451197 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-04 01:00:31.451222 | orchestrator | Sunday 04 May 2025 00:59:30 +0000 (0:00:00.502) 0:00:37.011 ************ 2025-05-04 01:00:31.451247 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-04 01:00:31.451267 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-04 01:00:31.451283 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-04 01:00:31.451297 | orchestrator | changed: [testbed-manager] => (item={'src': 
'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-04 01:00:31.451312 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-04 01:00:31.451339 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-04 01:00:31.452175 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-04 01:00:31.452206 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-04 01:00:31.452227 | orchestrator | 2025-05-04 01:00:31.452252 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-04 01:00:31.452276 | orchestrator | Sunday 04 May 2025 00:59:33 +0000 (0:00:02.942) 0:00:39.954 ************ 2025-05-04 01:00:31.452300 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:00:31.452324 | orchestrator | 2025-05-04 01:00:31.452347 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:00:31.452372 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-04 01:00:31.452414 | orchestrator | 2025-05-04 01:00:31.452440 | orchestrator | Sunday 04 May 2025 00:59:33 +0000 (0:00:00.041) 0:00:39.995 ************ 2025-05-04 01:00:31.452465 | orchestrator | =============================================================================== 2025-05-04 01:00:31.452489 | orchestrator | Fetch ceph keys 
from the first monitor node ---------------------------- 32.82s 2025-05-04 01:00:31.452509 | orchestrator | Check ceph keys --------------------------------------------------------- 3.14s 2025-05-04 01:00:31.452524 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.94s 2025-05-04 01:00:31.452538 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.50s 2025-05-04 01:00:31.452560 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-05-04 01:00:31.452575 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s 2025-05-04 01:00:31.452589 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-05-04 01:00:31.452603 | orchestrator | 2025-05-04 01:00:31.452716 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task ccf201ba-7c82-4db1-9ef0-bc6825c7cb5e is in state SUCCESS 2025-05-04 01:00:31.452745 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task a048ad2a-35c0-4148-9491-b658fd05dc36 is in state STARTED 2025-05-04 01:00:31.456165 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:00:31.456202 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:00:31.456567 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:00:31.457103 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:00:31.457730 | orchestrator | 2025-05-04 01:00:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:00:34.491884 | orchestrator | 2025-05-04 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:00:34.492016 | orchestrator | 2025-05-04 01:00:34 | INFO  | 
62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:00:58.827595 | orchestrator | 2025-05-04 01:00:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:01.856508 | orchestrator | 2025-05-04 01:00:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:01.856631 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task a048ad2a-35c0-4148-9491-b658fd05dc36 is in state STARTED 2025-05-04 01:01:01.857641 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:01.858908 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:01.859981 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:01.860896 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:01.861803 | orchestrator | 2025-05-04 01:01:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:04.892312 | orchestrator | 2025-05-04 01:01:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:04.892437 | orchestrator | 2025-05-04 01:01:04.892457 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-04 01:01:04.892473 | orchestrator | 2025-05-04 01:01:04.892488 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-04 01:01:04.892503 | orchestrator | Sunday 04 May 2025 00:59:36 +0000 (0:00:00.206) 0:00:00.206 ************ 2025-05-04 01:01:04.892518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-04 01:01:04.892539 | orchestrator | 2025-05-04 01:01:04.892554 | orchestrator | TASK [osism.services.cephclient : Create 
required directories] ***************** 2025-05-04 01:01:04.892568 | orchestrator | Sunday 04 May 2025 00:59:37 +0000 (0:00:00.223) 0:00:00.429 ************ 2025-05-04 01:01:04.892583 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-04 01:01:04.892597 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-04 01:01:04.892613 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-04 01:01:04.892627 | orchestrator | 2025-05-04 01:01:04.892641 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-04 01:01:04.892656 | orchestrator | Sunday 04 May 2025 00:59:38 +0000 (0:00:01.271) 0:00:01.701 ************ 2025-05-04 01:01:04.892671 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-04 01:01:04.892790 | orchestrator | 2025-05-04 01:01:04.892809 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-04 01:01:04.892882 | orchestrator | Sunday 04 May 2025 00:59:39 +0000 (0:00:01.189) 0:00:02.891 ************ 2025-05-04 01:01:04.892897 | orchestrator | changed: [testbed-manager] 2025-05-04 01:01:04.892919 | orchestrator | 2025-05-04 01:01:04.892933 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-04 01:01:04.892948 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.758) 0:00:03.649 ************ 2025-05-04 01:01:04.892962 | orchestrator | changed: [testbed-manager] 2025-05-04 01:01:04.892993 | orchestrator | 2025-05-04 01:01:04.893009 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-04 01:01:04.893023 | orchestrator | Sunday 04 May 2025 00:59:41 +0000 (0:00:00.807) 0:00:04.456 ************ 2025-05-04 01:01:04.893037 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
cephclient service (10 retries left). 2025-05-04 01:01:04.893052 | orchestrator | ok: [testbed-manager] 2025-05-04 01:01:04.893071 | orchestrator | 2025-05-04 01:01:04.893085 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-04 01:01:04.893100 | orchestrator | Sunday 04 May 2025 01:00:20 +0000 (0:00:39.389) 0:00:43.846 ************ 2025-05-04 01:01:04.893114 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-04 01:01:04.893129 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-04 01:01:04.893144 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-04 01:01:04.893159 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-04 01:01:04.893173 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-04 01:01:04.893188 | orchestrator | 2025-05-04 01:01:04.893202 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-04 01:01:04.893217 | orchestrator | Sunday 04 May 2025 01:00:23 +0000 (0:00:03.175) 0:00:47.021 ************ 2025-05-04 01:01:04.893231 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-04 01:01:04.893245 | orchestrator | 2025-05-04 01:01:04.893260 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-04 01:01:04.893274 | orchestrator | Sunday 04 May 2025 01:00:24 +0000 (0:00:00.419) 0:00:47.441 ************ 2025-05-04 01:01:04.893288 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:01:04.893327 | orchestrator | 2025-05-04 01:01:04.893435 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-04 01:01:04.893451 | orchestrator | Sunday 04 May 2025 01:00:24 +0000 (0:00:00.098) 0:00:47.539 ************ 2025-05-04 01:01:04.893466 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:01:04.893481 | orchestrator | 2025-05-04 01:01:04.893495 | 
orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-04 01:01:04.893509 | orchestrator | Sunday 04 May 2025 01:00:24 +0000 (0:00:00.288) 0:00:47.828 ************ 2025-05-04 01:01:04.893523 | orchestrator | changed: [testbed-manager] 2025-05-04 01:01:04.893538 | orchestrator | 2025-05-04 01:01:04.893552 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-04 01:01:04.893566 | orchestrator | Sunday 04 May 2025 01:00:25 +0000 (0:00:01.445) 0:00:49.273 ************ 2025-05-04 01:01:04.893581 | orchestrator | changed: [testbed-manager] 2025-05-04 01:01:04.893595 | orchestrator | 2025-05-04 01:01:04.893609 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-04 01:01:04.893624 | orchestrator | Sunday 04 May 2025 01:00:26 +0000 (0:00:00.821) 0:00:50.095 ************ 2025-05-04 01:01:04.893638 | orchestrator | changed: [testbed-manager] 2025-05-04 01:01:04.893652 | orchestrator | 2025-05-04 01:01:04.893667 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-04 01:01:04.893681 | orchestrator | Sunday 04 May 2025 01:00:27 +0000 (0:00:00.437) 0:00:50.533 ************ 2025-05-04 01:01:04.893696 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-04 01:01:04.893716 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-04 01:01:04.893731 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-04 01:01:04.893745 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-04 01:01:04.893760 | orchestrator | 2025-05-04 01:01:04.893774 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:01:04.893788 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-04 01:01:04.893804 | orchestrator | 2025-05-04 01:01:04.893853 | 
orchestrator | Sunday 04 May 2025 01:00:28 +0000 (0:00:01.171) 0:00:51.704 ************ 2025-05-04 01:01:04.894218 | orchestrator | =============================================================================== 2025-05-04 01:01:04.894246 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.39s 2025-05-04 01:01:04.894261 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.18s 2025-05-04 01:01:04.894275 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s 2025-05-04 01:01:04.894290 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s 2025-05-04 01:01:04.894304 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.19s 2025-05-04 01:01:04.894318 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.17s 2025-05-04 01:01:04.894333 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.82s 2025-05-04 01:01:04.894347 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s 2025-05-04 01:01:04.894361 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.76s 2025-05-04 01:01:04.894375 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.44s 2025-05-04 01:01:04.894390 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s 2025-05-04 01:01:04.894404 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-05-04 01:01:04.894419 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-05-04 01:01:04.894433 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s 2025-05-04 01:01:04.894447 | orchestrator | 
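The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a watcher that polls each background task's state until it leaves STARTED (compare the transition to SUCCESS for task a048ad2a… right after the cephclient play finishes). A minimal sketch of such a polling loop, with hypothetical names (`wait_for_tasks`, `get_state`) that are not the actual OSISM implementation:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=60.0):
    """Poll task states until all tasks leave STARTED, mirroring the
    'Task ... is in state STARTED' / 'Wait N second(s)' log pattern.
    Hypothetical helper, not the actual OSISM watcher."""
    pending = set(task_ids)
    results = {}
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        # Drop finished tasks from the polling set.
        pending -= set(results)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Each poll cycle reports every still-pending task, then sleeps for the configured interval, which matches the roughly three-second cadence of the timestamps in the log (one-second sleep plus per-task query latency).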
2025-05-04 01:01:04.894462 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task a048ad2a-35c0-4148-9491-b658fd05dc36 is in state SUCCESS 2025-05-04 01:01:04.894488 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:04.894503 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:04.894525 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:04.894948 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:04.895644 | orchestrator | 2025-05-04 01:01:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:04.895838 | orchestrator | 2025-05-04 01:01:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:07.928154 | orchestrator | 2025-05-04 01:01:07 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:07.928563 | orchestrator | 2025-05-04 01:01:07 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:07.929419 | orchestrator | 2025-05-04 01:01:07 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:07.930145 | orchestrator | 2025-05-04 01:01:07 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:07.930885 | orchestrator | 2025-05-04 01:01:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:07.931132 | orchestrator | 2025-05-04 01:01:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:10.967078 | orchestrator | 2025-05-04 01:01:10 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:10.967469 | orchestrator | 2025-05-04 01:01:10 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 
2025-05-04 01:01:10.968353 | orchestrator | 2025-05-04 01:01:10 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:10.968865 | orchestrator | 2025-05-04 01:01:10 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:10.969547 | orchestrator | 2025-05-04 01:01:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:14.023301 | orchestrator | 2025-05-04 01:01:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:14.023521 | orchestrator | 2025-05-04 01:01:14 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:14.024975 | orchestrator | 2025-05-04 01:01:14 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:14.025015 | orchestrator | 2025-05-04 01:01:14 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:14.025647 | orchestrator | 2025-05-04 01:01:14 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:14.030154 | orchestrator | 2025-05-04 01:01:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:17.064470 | orchestrator | 2025-05-04 01:01:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:17.064788 | orchestrator | 2025-05-04 01:01:17 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:17.065498 | orchestrator | 2025-05-04 01:01:17 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:17.065554 | orchestrator | 2025-05-04 01:01:17 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:17.066337 | orchestrator | 2025-05-04 01:01:17 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:17.067093 | orchestrator | 2025-05-04 01:01:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 
2025-05-04 01:01:20.115373 | orchestrator | 2025-05-04 01:01:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:20.115532 | orchestrator | 2025-05-04 01:01:20 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:20.119691 | orchestrator | 2025-05-04 01:01:20 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:23.164970 | orchestrator | 2025-05-04 01:01:20 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:23.165079 | orchestrator | 2025-05-04 01:01:20 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:23.165120 | orchestrator | 2025-05-04 01:01:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:23.165136 | orchestrator | 2025-05-04 01:01:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:23.165164 | orchestrator | 2025-05-04 01:01:23 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:23.165717 | orchestrator | 2025-05-04 01:01:23 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:23.166489 | orchestrator | 2025-05-04 01:01:23 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:23.168073 | orchestrator | 2025-05-04 01:01:23 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:23.168602 | orchestrator | 2025-05-04 01:01:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:23.168816 | orchestrator | 2025-05-04 01:01:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:26.210313 | orchestrator | 2025-05-04 01:01:26 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:26.211817 | orchestrator | 2025-05-04 01:01:26 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:26.212434 | 
orchestrator | 2025-05-04 01:01:26 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:26.213189 | orchestrator | 2025-05-04 01:01:26 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:26.213944 | orchestrator | 2025-05-04 01:01:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:29.254479 | orchestrator | 2025-05-04 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:29.254625 | orchestrator | 2025-05-04 01:01:29 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:29.255244 | orchestrator | 2025-05-04 01:01:29 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:29.255296 | orchestrator | 2025-05-04 01:01:29 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:29.255311 | orchestrator | 2025-05-04 01:01:29 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:29.255336 | orchestrator | 2025-05-04 01:01:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:32.311421 | orchestrator | 2025-05-04 01:01:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:32.311551 | orchestrator | 2025-05-04 01:01:32 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:32.311774 | orchestrator | 2025-05-04 01:01:32 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:32.312422 | orchestrator | 2025-05-04 01:01:32 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:32.313102 | orchestrator | 2025-05-04 01:01:32 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:32.315073 | orchestrator | 2025-05-04 01:01:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:32.315180 | 
orchestrator | 2025-05-04 01:01:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:35.378365 | orchestrator | 2025-05-04 01:01:35 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:35.378513 | orchestrator | 2025-05-04 01:01:35 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:35.378823 | orchestrator | 2025-05-04 01:01:35 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:35.378898 | orchestrator | 2025-05-04 01:01:35 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:35.379992 | orchestrator | 2025-05-04 01:01:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:38.412643 | orchestrator | 2025-05-04 01:01:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:38.412966 | orchestrator | 2025-05-04 01:01:38 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:38.414161 | orchestrator | 2025-05-04 01:01:38 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:38.414205 | orchestrator | 2025-05-04 01:01:38 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:38.414930 | orchestrator | 2025-05-04 01:01:38 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:38.416960 | orchestrator | 2025-05-04 01:01:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:41.450204 | orchestrator | 2025-05-04 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:41.450331 | orchestrator | 2025-05-04 01:01:41 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:41.450685 | orchestrator | 2025-05-04 01:01:41 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:41.451694 | orchestrator | 2025-05-04 
01:01:41 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:41.452788 | orchestrator | 2025-05-04 01:01:41 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:41.454790 | orchestrator | 2025-05-04 01:01:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:41.454935 | orchestrator | 2025-05-04 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:44.487345 | orchestrator | 2025-05-04 01:01:44 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:44.489863 | orchestrator | 2025-05-04 01:01:44 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:44.493883 | orchestrator | 2025-05-04 01:01:44 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:44.493940 | orchestrator | 2025-05-04 01:01:44 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:44.493967 | orchestrator | 2025-05-04 01:01:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:47.531417 | orchestrator | 2025-05-04 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:47.531547 | orchestrator | 2025-05-04 01:01:47 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:47.531770 | orchestrator | 2025-05-04 01:01:47 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:47.533146 | orchestrator | 2025-05-04 01:01:47 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:47.533720 | orchestrator | 2025-05-04 01:01:47 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:47.534313 | orchestrator | 2025-05-04 01:01:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:50.570683 | orchestrator | 2025-05-04 
01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:50.570831 | orchestrator | 2025-05-04 01:01:50 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:50.571626 | orchestrator | 2025-05-04 01:01:50 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:50.571677 | orchestrator | 2025-05-04 01:01:50 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:50.572417 | orchestrator | 2025-05-04 01:01:50 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:50.573282 | orchestrator | 2025-05-04 01:01:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:53.621270 | orchestrator | 2025-05-04 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:53.621417 | orchestrator | 2025-05-04 01:01:53 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:53.621823 | orchestrator | 2025-05-04 01:01:53 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:53.621981 | orchestrator | 2025-05-04 01:01:53 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:53.622181 | orchestrator | 2025-05-04 01:01:53 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:53.623981 | orchestrator | 2025-05-04 01:01:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:56.652094 | orchestrator | 2025-05-04 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:56.652216 | orchestrator | 2025-05-04 01:01:56 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:56.652618 | orchestrator | 2025-05-04 01:01:56 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:56.653192 | orchestrator | 2025-05-04 01:01:56 | INFO  | Task 
7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:56.653222 | orchestrator | 2025-05-04 01:01:56 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:56.653736 | orchestrator | 2025-05-04 01:01:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:01:56.653905 | orchestrator | 2025-05-04 01:01:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:01:59.690561 | orchestrator | 2025-05-04 01:01:59 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:01:59.692630 | orchestrator | 2025-05-04 01:01:59 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:01:59.694464 | orchestrator | 2025-05-04 01:01:59 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:01:59.695978 | orchestrator | 2025-05-04 01:01:59 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:01:59.697323 | orchestrator | 2025-05-04 01:01:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:02.725786 | orchestrator | 2025-05-04 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:02.725956 | orchestrator | 2025-05-04 01:02:02 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:02.726455 | orchestrator | 2025-05-04 01:02:02 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:02.727056 | orchestrator | 2025-05-04 01:02:02 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state STARTED 2025-05-04 01:02:02.727999 | orchestrator | 2025-05-04 01:02:02 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:02.729019 | orchestrator | 2025-05-04 01:02:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:05.756411 | orchestrator | 2025-05-04 01:02:02 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 01:02:05.756551 | orchestrator | 2025-05-04 01:02:05 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:05.756704 | orchestrator | 2025-05-04 01:02:05 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:05.757732 | orchestrator | 2025-05-04 01:02:05 | INFO  | Task 7234ab5b-99b5-4775-827e-6ecbec11990d is in state SUCCESS 2025-05-04 01:02:05.759851 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-04 01:02:05.759919 | orchestrator | 2025-05-04 01:02:05.759935 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-04 01:02:05.759950 | orchestrator | 2025-05-04 01:02:05.759965 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-04 01:02:05.759980 | orchestrator | Sunday 04 May 2025 01:00:31 +0000 (0:00:00.393) 0:00:00.393 ************ 2025-05-04 01:02:05.759995 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760019 | orchestrator | 2025-05-04 01:02:05.760033 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-04 01:02:05.760047 | orchestrator | Sunday 04 May 2025 01:00:32 +0000 (0:00:01.301) 0:00:01.695 ************ 2025-05-04 01:02:05.760062 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760076 | orchestrator | 2025-05-04 01:02:05.760091 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-04 01:02:05.760105 | orchestrator | Sunday 04 May 2025 01:00:33 +0000 (0:00:00.911) 0:00:02.607 ************ 2025-05-04 01:02:05.760119 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760134 | orchestrator | 2025-05-04 01:02:05.760148 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-04 01:02:05.760162 | 
orchestrator | Sunday 04 May 2025 01:00:34 +0000 (0:00:00.892) 0:00:03.500 ************ 2025-05-04 01:02:05.760176 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760190 | orchestrator | 2025-05-04 01:02:05.760204 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-04 01:02:05.760219 | orchestrator | Sunday 04 May 2025 01:00:35 +0000 (0:00:00.917) 0:00:04.417 ************ 2025-05-04 01:02:05.760234 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760248 | orchestrator | 2025-05-04 01:02:05.760262 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-04 01:02:05.760281 | orchestrator | Sunday 04 May 2025 01:00:36 +0000 (0:00:00.898) 0:00:05.315 ************ 2025-05-04 01:02:05.760295 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760310 | orchestrator | 2025-05-04 01:02:05.760324 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-04 01:02:05.760338 | orchestrator | Sunday 04 May 2025 01:00:37 +0000 (0:00:00.866) 0:00:06.182 ************ 2025-05-04 01:02:05.760352 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760366 | orchestrator | 2025-05-04 01:02:05.760381 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-04 01:02:05.760415 | orchestrator | Sunday 04 May 2025 01:00:38 +0000 (0:00:01.363) 0:00:07.546 ************ 2025-05-04 01:02:05.760430 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760445 | orchestrator | 2025-05-04 01:02:05.760459 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-04 01:02:05.760477 | orchestrator | Sunday 04 May 2025 01:00:39 +0000 (0:00:01.200) 0:00:08.747 ************ 2025-05-04 01:02:05.760495 | orchestrator | changed: [testbed-manager] 2025-05-04 01:02:05.760511 | orchestrator | 2025-05-04 
01:02:05.760527 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-04 01:02:05.760543 | orchestrator | Sunday 04 May 2025 01:00:57 +0000 (0:00:17.192) 0:00:25.939 ************ 2025-05-04 01:02:05.760559 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:02:05.760575 | orchestrator | 2025-05-04 01:02:05.760592 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-04 01:02:05.760608 | orchestrator | 2025-05-04 01:02:05.760625 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-04 01:02:05.760640 | orchestrator | Sunday 04 May 2025 01:00:57 +0000 (0:00:00.490) 0:00:26.430 ************ 2025-05-04 01:02:05.760656 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.760672 | orchestrator | 2025-05-04 01:02:05.760689 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-04 01:02:05.760705 | orchestrator | 2025-05-04 01:02:05.760721 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-04 01:02:05.760737 | orchestrator | Sunday 04 May 2025 01:00:59 +0000 (0:00:01.928) 0:00:28.358 ************ 2025-05-04 01:02:05.760753 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:05.760770 | orchestrator | 2025-05-04 01:02:05.760786 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-04 01:02:05.760803 | orchestrator | 2025-05-04 01:02:05.760819 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-04 01:02:05.760833 | orchestrator | Sunday 04 May 2025 01:01:01 +0000 (0:00:01.676) 0:00:30.035 ************ 2025-05-04 01:02:05.760847 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:05.760876 | orchestrator | 2025-05-04 01:02:05.760891 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-04 01:02:05.760907 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-04 01:02:05.760922 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:02:05.760936 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:02:05.760951 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:02:05.760965 | orchestrator | 2025-05-04 01:02:05.760979 | orchestrator | 2025-05-04 01:02:05.760993 | orchestrator | 2025-05-04 01:02:05.761008 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:02:05.761022 | orchestrator | Sunday 04 May 2025 01:01:02 +0000 (0:00:01.402) 0:00:31.438 ************ 2025-05-04 01:02:05.761036 | orchestrator | =============================================================================== 2025-05-04 01:02:05.761050 | orchestrator | Create admin user ------------------------------------------------------ 17.19s 2025-05-04 01:02:05.761074 | orchestrator | Restart ceph manager service -------------------------------------------- 5.01s 2025-05-04 01:02:05.761090 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.36s 2025-05-04 01:02:05.761104 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.30s 2025-05-04 01:02:05.761119 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.20s 2025-05-04 01:02:05.761133 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.92s 2025-05-04 01:02:05.761155 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-05-04 01:02:05.761169 | 
orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.90s 2025-05-04 01:02:05.761184 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.89s 2025-05-04 01:02:05.761198 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s 2025-05-04 01:02:05.761217 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.49s 2025-05-04 01:02:05.761231 | orchestrator | 2025-05-04 01:02:05.761245 | orchestrator | 2025-05-04 01:02:05.761260 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:02:05.761273 | orchestrator | 2025-05-04 01:02:05.761288 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:02:05.761302 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.392) 0:00:00.392 ************ 2025-05-04 01:02:05.761316 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:02:05.761330 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:02:05.761344 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:02:05.761359 | orchestrator | 2025-05-04 01:02:05.761373 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:02:05.761387 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.493) 0:00:00.886 ************ 2025-05-04 01:02:05.761401 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-04 01:02:05.761416 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-04 01:02:05.761430 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-04 01:02:05.761444 | orchestrator | 2025-05-04 01:02:05.761458 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-04 01:02:05.761472 | orchestrator | 2025-05-04 01:02:05.761486 | orchestrator | TASK 
[barbican : include_tasks] ************************************************ 2025-05-04 01:02:05.761500 | orchestrator | Sunday 04 May 2025 00:59:41 +0000 (0:00:00.376) 0:00:01.262 ************ 2025-05-04 01:02:05.761514 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:02:05.761529 | orchestrator | 2025-05-04 01:02:05.761543 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-04 01:02:05.761558 | orchestrator | Sunday 04 May 2025 00:59:41 +0000 (0:00:00.675) 0:00:01.938 ************ 2025-05-04 01:02:05.761572 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-04 01:02:05.761586 | orchestrator | 2025-05-04 01:02:05.761600 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-04 01:02:05.761614 | orchestrator | Sunday 04 May 2025 00:59:45 +0000 (0:00:03.389) 0:00:05.327 ************ 2025-05-04 01:02:05.761628 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-04 01:02:05.761642 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-04 01:02:05.761656 | orchestrator | 2025-05-04 01:02:05.761671 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-04 01:02:05.761685 | orchestrator | Sunday 04 May 2025 00:59:51 +0000 (0:00:06.072) 0:00:11.399 ************ 2025-05-04 01:02:05.761699 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 
2025-05-04 01:02:05.761713 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-04 01:02:05.761728 | orchestrator | 2025-05-04 01:02:05.761742 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-04 01:02:05.761756 | orchestrator | Sunday 04 May 2025 01:00:07 +0000 (0:00:16.294) 0:00:27.693 ************ 2025-05-04 01:02:05.761771 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:02:05.761785 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-04 01:02:05.761799 | orchestrator | 2025-05-04 01:02:05.761813 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-04 01:02:05.761833 | orchestrator | Sunday 04 May 2025 01:00:11 +0000 (0:00:03.889) 0:00:31.583 ************ 2025-05-04 01:02:05.761948 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:02:05.761968 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-04 01:02:05.761983 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-04 01:02:05.761997 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-04 01:02:05.762012 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-04 01:02:05.762089 | orchestrator | 2025-05-04 01:02:05.762104 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-04 01:02:05.762118 | orchestrator | Sunday 04 May 2025 01:00:27 +0000 (0:00:15.426) 0:00:47.010 ************ 2025-05-04 01:02:05.762132 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-04 01:02:05.762147 | orchestrator | 2025-05-04 01:02:05.762161 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-04 01:02:05.762175 | orchestrator | Sunday 04 May 2025 01:00:32 +0000 (0:00:05.001) 0:00:52.012 ************ 2025-05-04 
01:02:05.762212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762345 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762382 | orchestrator | 2025-05-04 01:02:05.762397 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-04 01:02:05.762412 | orchestrator | Sunday 04 May 2025 01:00:34 +0000 (0:00:02.342) 0:00:54.354 ************ 2025-05-04 01:02:05.762426 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-04 01:02:05.762440 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-04 01:02:05.762454 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-04 01:02:05.762474 | orchestrator | 2025-05-04 01:02:05.762500 | orchestrator | TASK [barbican : Check if policies shall be overwritten] 
*********************** 2025-05-04 01:02:05.762527 | orchestrator | Sunday 04 May 2025 01:00:37 +0000 (0:00:03.475) 0:00:57.829 ************ 2025-05-04 01:02:05.762553 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.762570 | orchestrator | 2025-05-04 01:02:05.762585 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-04 01:02:05.762599 | orchestrator | Sunday 04 May 2025 01:00:37 +0000 (0:00:00.079) 0:00:57.909 ************ 2025-05-04 01:02:05.762613 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.762628 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.762642 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.762656 | orchestrator | 2025-05-04 01:02:05.762671 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-04 01:02:05.762685 | orchestrator | Sunday 04 May 2025 01:00:38 +0000 (0:00:00.343) 0:00:58.252 ************ 2025-05-04 01:02:05.762699 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:02:05.762713 | orchestrator | 2025-05-04 01:02:05.762734 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-04 01:02:05.762794 | orchestrator | Sunday 04 May 2025 01:00:39 +0000 (0:00:01.284) 0:00:59.537 ************ 2025-05-04 01:02:05.762820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.762933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762971 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.762986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-05-04 01:02:05.763028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763043 | orchestrator | 2025-05-04 01:02:05.763058 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-04 01:02:05.763073 | orchestrator | Sunday 04 May 2025 01:00:44 +0000 (0:00:04.667) 0:01:04.204 ************ 2025-05-04 01:02:05.763088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
2025-05-04 01:02:05.763118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763149 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.763163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.763187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763214 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.763233 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.763247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763281 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.763294 | orchestrator | 2025-05-04 01:02:05.763307 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-04 01:02:05.763320 | orchestrator | Sunday 04 May 2025 01:00:45 +0000 (0:00:01.212) 0:01:05.417 ************ 2025-05-04 01:02:05.763333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.763347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763380 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.763393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.763412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763439 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.763452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.763471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.763503 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.763516 | orchestrator | 2025-05-04 01:02:05.763529 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-04 01:02:05.763541 | orchestrator | Sunday 04 May 2025 01:00:46 +0000 (0:00:00.909) 0:01:06.327 ************ 2025-05-04 01:02:05.763555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.763569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.763583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.763603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763650 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.763689 | orchestrator | 2025-05-04 01:02:05.763702 | 
orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-04 01:02:05.763715 | orchestrator | Sunday 04 May 2025 01:00:50 +0000 (0:00:03.949) 0:01:10.277 ************ 2025-05-04 01:02:05.763728 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.763740 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:05.763753 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:05.763766 | orchestrator | 2025-05-04 01:02:05.763779 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-04 01:02:05.763796 | orchestrator | Sunday 04 May 2025 01:00:53 +0000 (0:00:03.106) 0:01:13.384 ************ 2025-05-04 01:02:05.763815 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:02:05.763828 | orchestrator | 2025-05-04 01:02:05.763841 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-04 01:02:05.763854 | orchestrator | Sunday 04 May 2025 01:00:55 +0000 (0:00:02.049) 0:01:15.433 ************ 2025-05-04 01:02:05.763881 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.763894 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.763907 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.763920 | orchestrator | 2025-05-04 01:02:05.763932 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-04 01:02:05.763945 | orchestrator | Sunday 04 May 2025 01:00:57 +0000 (0:00:02.246) 0:01:17.679 ************ 2025-05-04 01:02:05.763958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.763972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.763986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.764006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764128 | orchestrator | 2025-05-04 01:02:05.764148 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-04 01:02:05.764168 | orchestrator | Sunday 04 May 2025 01:01:09 +0000 (0:00:11.550) 0:01:29.229 ************ 2025-05-04 01:02:05.764237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.764265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764310 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.764333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.764349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764392 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.764405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-04 01:02:05.764419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764433 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:05.764446 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.764459 | orchestrator | 2025-05-04 01:02:05.764472 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-04 01:02:05.764485 | orchestrator | Sunday 04 May 2025 01:01:10 +0000 (0:00:01.486) 0:01:30.716 ************ 2025-05-04 01:02:05.764498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.764525 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-04 01:02:05.764567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-05-04 01:02:05.764582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:05.764687 | orchestrator | 2025-05-04 01:02:05.764700 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-04 01:02:05.764713 | orchestrator | Sunday 04 May 2025 01:01:14 +0000 (0:00:03.583) 0:01:34.299 ************ 2025-05-04 01:02:05.764726 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:05.764739 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:05.764752 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:05.764764 | orchestrator | 2025-05-04 01:02:05.764777 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-04 01:02:05.764790 | orchestrator | Sunday 04 May 2025 01:01:14 +0000 (0:00:00.496) 0:01:34.796 ************ 2025-05-04 01:02:05.764803 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.764815 | orchestrator | 2025-05-04 01:02:05.764828 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-04 01:02:05.764841 | orchestrator | Sunday 04 May 2025 01:01:17 +0000 (0:00:02.817) 0:01:37.614 ************ 2025-05-04 01:02:05.764853 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.764884 | orchestrator | 2025-05-04 01:02:05.764897 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-04 01:02:05.764910 | orchestrator | Sunday 04 May 2025 01:01:20 +0000 (0:00:02.447) 0:01:40.061 ************ 2025-05-04 01:02:05.764922 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.764934 | orchestrator | 2025-05-04 01:02:05.764947 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-04 
01:02:05.764965 | orchestrator | Sunday 04 May 2025 01:01:31 +0000 (0:00:11.022) 0:01:51.084 ************ 2025-05-04 01:02:05.764978 | orchestrator | 2025-05-04 01:02:05.764990 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-04 01:02:05.765003 | orchestrator | Sunday 04 May 2025 01:01:31 +0000 (0:00:00.184) 0:01:51.269 ************ 2025-05-04 01:02:05.765016 | orchestrator | 2025-05-04 01:02:05.765033 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-04 01:02:05.765046 | orchestrator | Sunday 04 May 2025 01:01:31 +0000 (0:00:00.423) 0:01:51.693 ************ 2025-05-04 01:02:05.765058 | orchestrator | 2025-05-04 01:02:05.765071 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-04 01:02:05.765084 | orchestrator | Sunday 04 May 2025 01:01:31 +0000 (0:00:00.068) 0:01:51.761 ************ 2025-05-04 01:02:05.765096 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:05.765109 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:05.765121 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.765142 | orchestrator | 2025-05-04 01:02:05.765155 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-04 01:02:05.765167 | orchestrator | Sunday 04 May 2025 01:01:40 +0000 (0:00:08.671) 0:02:00.433 ************ 2025-05-04 01:02:05.765180 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:05.765193 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:05.765206 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:05.765218 | orchestrator | 2025-05-04 01:02:05.765231 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-04 01:02:05.765243 | orchestrator | Sunday 04 May 2025 01:01:50 +0000 (0:00:10.497) 0:02:10.930 ************ 2025-05-04 01:02:05.765256 | orchestrator | 
changed: [testbed-node-0] 2025-05-04 01:02:05.765268 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:05.765281 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:05.765293 | orchestrator | 2025-05-04 01:02:05.765306 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:02:05.765319 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:02:05.765332 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 01:02:05.765345 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 01:02:05.765358 | orchestrator | 2025-05-04 01:02:05.765371 | orchestrator | 2025-05-04 01:02:05.765390 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:02:08.793104 | orchestrator | Sunday 04 May 2025 01:02:04 +0000 (0:00:13.596) 0:02:24.526 ************ 2025-05-04 01:02:08.793207 | orchestrator | =============================================================================== 2025-05-04 01:02:08.793238 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 16.29s 2025-05-04 01:02:08.793253 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.43s 2025-05-04 01:02:08.793266 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.60s 2025-05-04 01:02:08.793278 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.55s 2025-05-04 01:02:08.793291 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.02s 2025-05-04 01:02:08.793303 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.50s 2025-05-04 01:02:08.793316 | orchestrator | barbican : Restart 
barbican-api container ------------------------------- 8.67s 2025-05-04 01:02:08.793329 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.07s 2025-05-04 01:02:08.793342 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.00s 2025-05-04 01:02:08.793381 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.67s 2025-05-04 01:02:08.793395 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.95s 2025-05-04 01:02:08.793408 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.89s 2025-05-04 01:02:08.793421 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.58s 2025-05-04 01:02:08.793433 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.48s 2025-05-04 01:02:08.793446 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.39s 2025-05-04 01:02:08.793459 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.11s 2025-05-04 01:02:08.793471 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.82s 2025-05-04 01:02:08.793484 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.45s 2025-05-04 01:02:08.793497 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.34s 2025-05-04 01:02:08.793522 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.25s 2025-05-04 01:02:08.793536 | orchestrator | 2025-05-04 01:02:05 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:08.793554 | orchestrator | 2025-05-04 01:02:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:08.793567 | orchestrator | 2025-05-04 
01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:08.793596 | orchestrator | 2025-05-04 01:02:08 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:08.793805 | orchestrator | 2025-05-04 01:02:08 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:08.793959 | orchestrator | 2025-05-04 01:02:08 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:08.794361 | orchestrator | 2025-05-04 01:02:08 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:08.795196 | orchestrator | 2025-05-04 01:02:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:11.823045 | orchestrator | 2025-05-04 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:11.823174 | orchestrator | 2025-05-04 01:02:11 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:11.823352 | orchestrator | 2025-05-04 01:02:11 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:11.823386 | orchestrator | 2025-05-04 01:02:11 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:11.827136 | orchestrator | 2025-05-04 01:02:11 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:11.827732 | orchestrator | 2025-05-04 01:02:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:14.871625 | orchestrator | 2025-05-04 01:02:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:14.871761 | orchestrator | 2025-05-04 01:02:14 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:14.873133 | orchestrator | 2025-05-04 01:02:14 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:14.873686 | orchestrator | 2025-05-04 01:02:14 | INFO  | Task 
94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:14.874755 | orchestrator | 2025-05-04 01:02:14 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:14.875996 | orchestrator | 2025-05-04 01:02:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:17.920756 | orchestrator | 2025-05-04 01:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:17.920955 | orchestrator | 2025-05-04 01:02:17 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:17.921729 | orchestrator | 2025-05-04 01:02:17 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:17.923724 | orchestrator | 2025-05-04 01:02:17 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:17.925398 | orchestrator | 2025-05-04 01:02:17 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:17.927528 | orchestrator | 2025-05-04 01:02:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:20.974743 | orchestrator | 2025-05-04 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:20.974924 | orchestrator | 2025-05-04 01:02:20 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:20.976556 | orchestrator | 2025-05-04 01:02:20 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:20.978711 | orchestrator | 2025-05-04 01:02:20 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:20.980826 | orchestrator | 2025-05-04 01:02:20 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:20.982726 | orchestrator | 2025-05-04 01:02:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:24.038235 | orchestrator | 2025-05-04 01:02:20 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 01:02:24.038391 | orchestrator | 2025-05-04 01:02:24 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:27.092463 | orchestrator | 2025-05-04 01:02:24 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:27.092639 | orchestrator | 2025-05-04 01:02:24 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:27.092676 | orchestrator | 2025-05-04 01:02:24 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:27.092702 | orchestrator | 2025-05-04 01:02:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:27.092770 | orchestrator | 2025-05-04 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:27.092818 | orchestrator | 2025-05-04 01:02:27 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:27.094545 | orchestrator | 2025-05-04 01:02:27 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:27.097731 | orchestrator | 2025-05-04 01:02:27 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:27.100197 | orchestrator | 2025-05-04 01:02:27 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:27.101078 | orchestrator | 2025-05-04 01:02:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:27.101236 | orchestrator | 2025-05-04 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:30.148578 | orchestrator | 2025-05-04 01:02:30 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:30.150073 | orchestrator | 2025-05-04 01:02:30 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:30.151636 | orchestrator | 2025-05-04 01:02:30 | INFO  | Task 
94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:30.153680 | orchestrator | 2025-05-04 01:02:30 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:30.155438 | orchestrator | 2025-05-04 01:02:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:30.155658 | orchestrator | 2025-05-04 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:33.211156 | orchestrator | 2025-05-04 01:02:33 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:33.212448 | orchestrator | 2025-05-04 01:02:33 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:33.213841 | orchestrator | 2025-05-04 01:02:33 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:33.215146 | orchestrator | 2025-05-04 01:02:33 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:33.216728 | orchestrator | 2025-05-04 01:02:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:36.270353 | orchestrator | 2025-05-04 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:36.270512 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:36.270711 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:36.270745 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:36.271451 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 83b48686-3ea3-4d64-8e38-2199156ee7e2 is in state STARTED 2025-05-04 01:02:36.273987 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:36.275111 | orchestrator | 2025-05-04 01:02:36 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:39.328128 | orchestrator | 2025-05-04 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:39.328255 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:39.329948 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:39.332114 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:39.333666 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 83b48686-3ea3-4d64-8e38-2199156ee7e2 is in state STARTED 2025-05-04 01:02:39.335050 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:39.336592 | orchestrator | 2025-05-04 01:02:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:42.377709 | orchestrator | 2025-05-04 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:42.377863 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:42.378278 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:42.378311 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:42.378334 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 83b48686-3ea3-4d64-8e38-2199156ee7e2 is in state STARTED 2025-05-04 01:02:42.378934 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:42.379620 | orchestrator | 2025-05-04 01:02:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:42.379812 | orchestrator | 2025-05-04 01:02:42 | INFO  | Wait 1 
second(s) until the next check 2025-05-04 01:02:45.428548 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:45.430213 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:45.431207 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:45.432168 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 83b48686-3ea3-4d64-8e38-2199156ee7e2 is in state STARTED 2025-05-04 01:02:45.432833 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state STARTED 2025-05-04 01:02:45.433922 | orchestrator | 2025-05-04 01:02:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:48.480158 | orchestrator | 2025-05-04 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:48.480309 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:48.481680 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:48.483549 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:48.484301 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 83b48686-3ea3-4d64-8e38-2199156ee7e2 is in state SUCCESS 2025-05-04 01:02:48.487160 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 62acd915-f9e8-4fd1-831d-0aec88efec1a is in state SUCCESS 2025-05-04 01:02:48.490583 | orchestrator | 2025-05-04 01:02:48.490682 | orchestrator | None 2025-05-04 01:02:48.490703 | orchestrator | 2025-05-04 01:02:48.490720 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:02:48.490737 | orchestrator | 2025-05-04 01:02:48.490752 | orchestrator | TASK 
[Group hosts based on Kolla action] *************************************** 2025-05-04 01:02:48.490769 | orchestrator | Sunday 04 May 2025 00:59:39 +0000 (0:00:00.399) 0:00:00.399 ************ 2025-05-04 01:02:48.490784 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:02:48.490802 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:02:48.490817 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:02:48.490833 | orchestrator | 2025-05-04 01:02:48.490848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:02:48.490864 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.440) 0:00:00.840 ************ 2025-05-04 01:02:48.490906 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-04 01:02:48.490924 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-04 01:02:48.490938 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-04 01:02:48.490953 | orchestrator | 2025-05-04 01:02:48.490967 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-04 01:02:48.490981 | orchestrator | 2025-05-04 01:02:48.490996 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-04 01:02:48.491010 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.467) 0:00:01.307 ************ 2025-05-04 01:02:48.491025 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:02:48.491041 | orchestrator | 2025-05-04 01:02:48.491056 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-04 01:02:48.491070 | orchestrator | Sunday 04 May 2025 00:59:41 +0000 (0:00:00.913) 0:00:02.221 ************ 2025-05-04 01:02:48.491085 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-04 01:02:48.491100 | orchestrator | 
2025-05-04 01:02:48.491149 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-04 01:02:48.491166 | orchestrator | Sunday 04 May 2025 00:59:44 +0000 (0:00:03.462) 0:00:05.684 ************ 2025-05-04 01:02:48.491182 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-04 01:02:48.491199 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-04 01:02:48.491215 | orchestrator | 2025-05-04 01:02:48.491231 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-04 01:02:48.491248 | orchestrator | Sunday 04 May 2025 00:59:51 +0000 (0:00:06.401) 0:00:12.085 ************ 2025-05-04 01:02:48.491264 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-04 01:02:48.491281 | orchestrator | 2025-05-04 01:02:48.491298 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-04 01:02:48.491352 | orchestrator | Sunday 04 May 2025 00:59:54 +0000 (0:00:03.684) 0:00:15.769 ************ 2025-05-04 01:02:48.491370 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:02:48.491386 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-04 01:02:48.491403 | orchestrator | 2025-05-04 01:02:48.491419 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-04 01:02:48.491435 | orchestrator | Sunday 04 May 2025 00:59:58 +0000 (0:00:03.766) 0:00:19.535 ************ 2025-05-04 01:02:48.491452 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:02:48.491469 | orchestrator | 2025-05-04 01:02:48.491484 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-04 01:02:48.491498 | orchestrator | Sunday 04 May 2025 01:00:02 +0000 
(0:00:03.272) 0:00:22.807 ************ 2025-05-04 01:02:48.491513 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-04 01:02:48.491527 | orchestrator | 2025-05-04 01:02:48.491541 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-04 01:02:48.491555 | orchestrator | Sunday 04 May 2025 01:00:06 +0000 (0:00:04.098) 0:00:26.906 ************ 2025-05-04 01:02:48.491572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.491604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.491621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.491646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491719 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.491913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.491944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.491966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.491988 | orchestrator | 2025-05-04 01:02:48.492003 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-04 01:02:48.492018 | orchestrator | Sunday 04 May 2025 01:00:09 +0000 (0:00:02.936) 0:00:29.843 ************ 2025-05-04 01:02:48.492033 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.492048 | orchestrator | 2025-05-04 01:02:48.492063 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-04 01:02:48.492077 | orchestrator | Sunday 04 May 2025 01:00:09 +0000 (0:00:00.118) 0:00:29.961 ************ 2025-05-04 01:02:48.492091 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.492105 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:48.492120 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.492134 | orchestrator | 2025-05-04 01:02:48.492149 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-04 01:02:48.492164 | orchestrator | Sunday 04 May 2025 01:00:09 +0000 (0:00:00.419) 0:00:30.381 
************ 2025-05-04 01:02:48.492192 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:02:48.492207 | orchestrator | 2025-05-04 01:02:48.492222 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-04 01:02:48.492236 | orchestrator | Sunday 04 May 2025 01:00:10 +0000 (0:00:00.595) 0:00:30.976 ************ 2025-05-04 01:02:48.492251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.492266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.492281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.492311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492373 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.492567 | orchestrator | 2025-05-04 01:02:48.492582 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-04 01:02:48.492601 | orchestrator | Sunday 04 May 2025 01:00:16 +0000 (0:00:06.465) 0:00:37.442 ************ 2025-05-04 01:02:48.492617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.492632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.492648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492722 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.492737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.492752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.492767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492851 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.492866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.492897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.492914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.492974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 
01:02:48.492989 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:48.493003 | orchestrator | 2025-05-04 01:02:48.493018 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-04 01:02:48.493032 | orchestrator | Sunday 04 May 2025 01:00:18 +0000 (0:00:02.091) 0:00:39.533 ************ 2025-05-04 01:02:48.493047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.493062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.493077 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493150 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.493164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.493180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.493206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493273 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:48.493288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.493303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.493326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493394 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.493409 | orchestrator | 2025-05-04 01:02:48.493423 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-04 01:02:48.493437 | orchestrator | Sunday 04 May 2025 01:00:19 +0000 (0:00:01.048) 0:00:40.582 ************ 2025-05-04 01:02:48.493452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493769 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.493805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.493820 | orchestrator | 2025-05-04 01:02:48.493834 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-04 
01:02:48.493849 | orchestrator | Sunday 04 May 2025 01:00:26 +0000 (0:00:06.596) 0:00:47.178 ************ 2025-05-04 01:02:48.493864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493978 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.493991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494337 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494390 | orchestrator | 2025-05-04 01:02:48.494404 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-04 01:02:48.494417 | orchestrator | Sunday 04 May 2025 01:00:49 +0000 (0:00:23.479) 0:01:10.658 ************ 2025-05-04 01:02:48.494430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-04 01:02:48.494442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-04 01:02:48.494455 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-04 01:02:48.494468 | orchestrator | 2025-05-04 01:02:48.494481 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-04 01:02:48.494494 | orchestrator | Sunday 04 May 2025 01:00:57 +0000 (0:00:07.296) 0:01:17.954 ************ 2025-05-04 01:02:48.494507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-04 01:02:48.494540 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-04 01:02:48.494554 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-04 01:02:48.494566 | orchestrator | 2025-05-04 01:02:48.494579 | orchestrator | TASK [designate : Copying over rndc.conf] 
************************************** 2025-05-04 01:02:48.494592 | orchestrator | Sunday 04 May 2025 01:01:02 +0000 (0:00:05.460) 0:01:23.415 ************ 2025-05-04 01:02:48.494605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.494619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 
01:02:48.494633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.494646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.494939 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.494953 | orchestrator | 2025-05-04 01:02:48.494966 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-04 01:02:48.494979 | orchestrator | Sunday 04 May 2025 01:01:06 +0000 (0:00:04.064) 0:01:27.479 ************ 2025-05-04 01:02:48.494992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495322 | orchestrator | 2025-05-04 01:02:48.495334 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-04 01:02:48.495347 | orchestrator | Sunday 04 May 2025 01:01:10 +0000 (0:00:03.536) 0:01:31.016 ************ 2025-05-04 01:02:48.495360 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.495373 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:48.495386 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.495398 | orchestrator | 2025-05-04 
01:02:48.495411 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-04 01:02:48.495424 | orchestrator | Sunday 04 May 2025 01:01:10 +0000 (0:00:00.766) 0:01:31.782 ************ 2025-05-04 01:02:48.495443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.495472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495551 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.495564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.495591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2025-05-04 01:02:48.495623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495671 | orchestrator | skipping: [testbed-node-1] 
2025-05-04 01:02:48.495684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-04 01:02:48.495697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-04 01:02:48.495716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.495790 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.495803 | orchestrator | 2025-05-04 01:02:48.495816 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-04 01:02:48.495829 | orchestrator | Sunday 04 May 2025 01:01:12 +0000 (0:00:01.134) 0:01:32.917 ************ 2025-05-04 01:02:48.495842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.495862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.495875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-04 01:02:48.495912 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.495987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496006 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.496188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.496222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-04 01:02:48.496236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-04 01:02:48.496249 | orchestrator | 2025-05-04 01:02:48.496262 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-04 01:02:48.496275 | orchestrator | Sunday 04 May 2025 01:01:17 +0000 (0:00:05.295) 0:01:38.212 ************ 2025-05-04 01:02:48.496288 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:02:48.496301 | 
orchestrator | skipping: [testbed-node-1] 2025-05-04 01:02:48.496314 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:02:48.496327 | orchestrator | 2025-05-04 01:02:48.496340 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-04 01:02:48.496352 | orchestrator | Sunday 04 May 2025 01:01:18 +0000 (0:00:00.934) 0:01:39.146 ************ 2025-05-04 01:02:48.496365 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-04 01:02:48.496378 | orchestrator | 2025-05-04 01:02:48.496391 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-04 01:02:48.496403 | orchestrator | Sunday 04 May 2025 01:01:20 +0000 (0:00:02.153) 0:01:41.299 ************ 2025-05-04 01:02:48.496416 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 01:02:48.496429 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-04 01:02:48.496443 | orchestrator | 2025-05-04 01:02:48.496455 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-04 01:02:48.496468 | orchestrator | Sunday 04 May 2025 01:01:22 +0000 (0:00:02.459) 0:01:43.759 ************ 2025-05-04 01:02:48.496481 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.496494 | orchestrator | 2025-05-04 01:02:48.496506 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-04 01:02:48.496519 | orchestrator | Sunday 04 May 2025 01:01:36 +0000 (0:00:13.976) 0:01:57.736 ************ 2025-05-04 01:02:48.496532 | orchestrator | 2025-05-04 01:02:48.496544 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-04 01:02:48.496562 | orchestrator | Sunday 04 May 2025 01:01:37 +0000 (0:00:00.077) 0:01:57.813 ************ 2025-05-04 01:02:48.496575 | orchestrator | 2025-05-04 01:02:48.496588 | orchestrator | TASK 
[designate : Flush handlers] ********************************************** 2025-05-04 01:02:48.496608 | orchestrator | Sunday 04 May 2025 01:01:37 +0000 (0:00:00.062) 0:01:57.876 ************ 2025-05-04 01:02:48.496629 | orchestrator | 2025-05-04 01:02:48.496651 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-04 01:02:48.496682 | orchestrator | Sunday 04 May 2025 01:01:37 +0000 (0:00:00.059) 0:01:57.935 ************ 2025-05-04 01:02:48.496703 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.496722 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.496740 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.496761 | orchestrator | 2025-05-04 01:02:48.496783 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-04 01:02:48.496803 | orchestrator | Sunday 04 May 2025 01:01:51 +0000 (0:00:13.892) 0:02:11.828 ************ 2025-05-04 01:02:48.496824 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.496844 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.496864 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.496920 | orchestrator | 2025-05-04 01:02:48.496944 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-04 01:02:48.496959 | orchestrator | Sunday 04 May 2025 01:02:02 +0000 (0:00:11.685) 0:02:23.513 ************ 2025-05-04 01:02:48.496972 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.496984 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.496997 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.497011 | orchestrator | 2025-05-04 01:02:48.497024 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-04 01:02:48.497036 | orchestrator | Sunday 04 May 2025 01:02:14 +0000 (0:00:11.759) 0:02:35.273 ************ 2025-05-04 01:02:48.497049 | 
orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.497063 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.497075 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.497088 | orchestrator | 2025-05-04 01:02:48.497101 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-04 01:02:48.497113 | orchestrator | Sunday 04 May 2025 01:02:25 +0000 (0:00:10.669) 0:02:45.943 ************ 2025-05-04 01:02:48.497126 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.497138 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.497151 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.497164 | orchestrator | 2025-05-04 01:02:48.497177 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-04 01:02:48.497190 | orchestrator | Sunday 04 May 2025 01:02:33 +0000 (0:00:08.764) 0:02:54.707 ************ 2025-05-04 01:02:48.497203 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.497215 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:02:48.497227 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:02:48.497240 | orchestrator | 2025-05-04 01:02:48.497253 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-04 01:02:48.497265 | orchestrator | Sunday 04 May 2025 01:02:40 +0000 (0:00:06.543) 0:03:01.250 ************ 2025-05-04 01:02:48.497278 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:02:48.497292 | orchestrator | 2025-05-04 01:02:48.497313 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:02:48.497335 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:02:48.497358 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 01:02:48.497379 | 
orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 01:02:48.497397 | orchestrator | 2025-05-04 01:02:48.497411 | orchestrator | 2025-05-04 01:02:48.497424 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:02:48.497437 | orchestrator | Sunday 04 May 2025 01:02:45 +0000 (0:00:05.384) 0:03:06.635 ************ 2025-05-04 01:02:48.497449 | orchestrator | =============================================================================== 2025-05-04 01:02:48.497462 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.48s 2025-05-04 01:02:48.497475 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.98s 2025-05-04 01:02:48.497497 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.89s 2025-05-04 01:02:48.497511 | orchestrator | designate : Restart designate-central container ------------------------ 11.76s 2025-05-04 01:02:48.497523 | orchestrator | designate : Restart designate-api container ---------------------------- 11.69s 2025-05-04 01:02:48.497536 | orchestrator | designate : Restart designate-producer container ----------------------- 10.67s 2025-05-04 01:02:48.497549 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.76s 2025-05-04 01:02:48.497561 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.30s 2025-05-04 01:02:48.497574 | orchestrator | designate : Copying over config.json files for services ----------------- 6.60s 2025-05-04 01:02:48.497586 | orchestrator | designate : Restart designate-worker container -------------------------- 6.54s 2025-05-04 01:02:48.497599 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.47s 2025-05-04 01:02:48.497611 | orchestrator | service-ks-register : designate 
| Creating endpoints -------------------- 6.40s 2025-05-04 01:02:48.497624 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.46s 2025-05-04 01:02:48.497637 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.38s 2025-05-04 01:02:48.497649 | orchestrator | designate : Check designate containers ---------------------------------- 5.30s 2025-05-04 01:02:48.497662 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.10s 2025-05-04 01:02:48.497674 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.06s 2025-05-04 01:02:48.497696 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.77s 2025-05-04 01:02:48.497869 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.68s 2025-05-04 01:02:48.497941 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.54s 2025-05-04 01:02:48.497961 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:02:51.547554 | orchestrator | 2025-05-04 01:02:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:51.547683 | orchestrator | 2025-05-04 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:51.547720 | orchestrator | 2025-05-04 01:02:51 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:51.549512 | orchestrator | 2025-05-04 01:02:51 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:51.553358 | orchestrator | 2025-05-04 01:02:51 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:51.555455 | orchestrator | 2025-05-04 01:02:51 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:02:51.556741 | 
orchestrator | 2025-05-04 01:02:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:54.615623 | orchestrator | 2025-05-04 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:54.615781 | orchestrator | 2025-05-04 01:02:54 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:54.618195 | orchestrator | 2025-05-04 01:02:54 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:54.622089 | orchestrator | 2025-05-04 01:02:54 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:54.623602 | orchestrator | 2025-05-04 01:02:54 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:02:54.624921 | orchestrator | 2025-05-04 01:02:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:57.684283 | orchestrator | 2025-05-04 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:02:57.684539 | orchestrator | 2025-05-04 01:02:57 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:02:57.685913 | orchestrator | 2025-05-04 01:02:57 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:02:57.687428 | orchestrator | 2025-05-04 01:02:57 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:02:57.688941 | orchestrator | 2025-05-04 01:02:57 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:02:57.690264 | orchestrator | 2025-05-04 01:02:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:02:57.690588 | orchestrator | 2025-05-04 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:00.742417 | orchestrator | 2025-05-04 01:03:00 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:00.744257 | orchestrator | 2025-05-04 
01:03:00 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:00.748083 | orchestrator | 2025-05-04 01:03:00 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:00.750189 | orchestrator | 2025-05-04 01:03:00 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:00.757790 | orchestrator | 2025-05-04 01:03:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:00.757945 | orchestrator | 2025-05-04 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:03.822648 | orchestrator | 2025-05-04 01:03:03 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:06.863860 | orchestrator | 2025-05-04 01:03:03 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:06.864078 | orchestrator | 2025-05-04 01:03:03 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:06.864271 | orchestrator | 2025-05-04 01:03:03 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:06.864299 | orchestrator | 2025-05-04 01:03:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:06.864330 | orchestrator | 2025-05-04 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:06.864386 | orchestrator | 2025-05-04 01:03:06 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:06.866371 | orchestrator | 2025-05-04 01:03:06 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:06.866422 | orchestrator | 2025-05-04 01:03:06 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:06.870860 | orchestrator | 2025-05-04 01:03:06 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:06.874952 | orchestrator | 2025-05-04 
01:03:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:09.924786 | orchestrator | 2025-05-04 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:09.925042 | orchestrator | 2025-05-04 01:03:09 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:09.925508 | orchestrator | 2025-05-04 01:03:09 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:09.926595 | orchestrator | 2025-05-04 01:03:09 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:09.928321 | orchestrator | 2025-05-04 01:03:09 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:09.930780 | orchestrator | 2025-05-04 01:03:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:12.984993 | orchestrator | 2025-05-04 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:12.985162 | orchestrator | 2025-05-04 01:03:12 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:12.987205 | orchestrator | 2025-05-04 01:03:12 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:12.988513 | orchestrator | 2025-05-04 01:03:12 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:12.989828 | orchestrator | 2025-05-04 01:03:12 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:12.991557 | orchestrator | 2025-05-04 01:03:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:16.057806 | orchestrator | 2025-05-04 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:16.058086 | orchestrator | 2025-05-04 01:03:16 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:16.058799 | orchestrator | 2025-05-04 01:03:16 | INFO  | Task 
96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:16.061136 | orchestrator | 2025-05-04 01:03:16 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:16.063205 | orchestrator | 2025-05-04 01:03:16 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:16.064728 | orchestrator | 2025-05-04 01:03:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:16.064868 | orchestrator | 2025-05-04 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:19.123885 | orchestrator | 2025-05-04 01:03:19 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:19.125823 | orchestrator | 2025-05-04 01:03:19 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state STARTED 2025-05-04 01:03:19.127369 | orchestrator | 2025-05-04 01:03:19 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:19.129380 | orchestrator | 2025-05-04 01:03:19 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:19.130231 | orchestrator | 2025-05-04 01:03:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:22.178622 | orchestrator | 2025-05-04 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:22.178758 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:22.180179 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 96e1c7e0-d7cb-4107-b1ca-1c982f7928df is in state SUCCESS 2025-05-04 01:03:22.181863 | orchestrator | 2025-05-04 01:03:22.181953 | orchestrator | 2025-05-04 01:03:22.181970 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:03:22.181985 | orchestrator | 2025-05-04 01:03:22.182000 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-05-04 01:03:22.182084 | orchestrator | Sunday 04 May 2025 01:02:08 +0000 (0:00:00.212) 0:00:00.212 ************ 2025-05-04 01:03:22.182119 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:03:22.182147 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:03:22.182164 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:03:22.182179 | orchestrator | 2025-05-04 01:03:22.182193 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:03:22.182208 | orchestrator | Sunday 04 May 2025 01:02:09 +0000 (0:00:00.341) 0:00:00.554 ************ 2025-05-04 01:03:22.182254 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-04 01:03:22.182270 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-04 01:03:22.182284 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-04 01:03:22.182298 | orchestrator | 2025-05-04 01:03:22.182313 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-04 01:03:22.182327 | orchestrator | 2025-05-04 01:03:22.182341 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-04 01:03:22.182355 | orchestrator | Sunday 04 May 2025 01:02:09 +0000 (0:00:00.390) 0:00:00.945 ************ 2025-05-04 01:03:22.182371 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:03:22.182411 | orchestrator | 2025-05-04 01:03:22.182427 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-04 01:03:22.182441 | orchestrator | Sunday 04 May 2025 01:02:10 +0000 (0:00:01.185) 0:00:02.130 ************ 2025-05-04 01:03:22.182458 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-04 01:03:22.182473 | orchestrator | 2025-05-04 01:03:22.182491 | 
orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-04 01:03:22.182507 | orchestrator | Sunday 04 May 2025 01:02:14 +0000 (0:00:03.305) 0:00:05.435 ************ 2025-05-04 01:03:22.182523 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-04 01:03:22.182557 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-04 01:03:22.182573 | orchestrator | 2025-05-04 01:03:22.182589 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-04 01:03:22.182605 | orchestrator | Sunday 04 May 2025 01:02:20 +0000 (0:00:06.255) 0:00:11.691 ************ 2025-05-04 01:03:22.182622 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-04 01:03:22.182639 | orchestrator | 2025-05-04 01:03:22.182655 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-04 01:03:22.182671 | orchestrator | Sunday 04 May 2025 01:02:23 +0000 (0:00:03.507) 0:00:15.199 ************ 2025-05-04 01:03:22.182687 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:03:22.182703 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-04 01:03:22.182719 | orchestrator | 2025-05-04 01:03:22.182735 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-04 01:03:22.182751 | orchestrator | Sunday 04 May 2025 01:02:27 +0000 (0:00:03.737) 0:00:18.936 ************ 2025-05-04 01:03:22.182768 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:03:22.182784 | orchestrator | 2025-05-04 01:03:22.182801 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-04 01:03:22.182815 | orchestrator | Sunday 04 May 2025 01:02:30 +0000 (0:00:03.203) 0:00:22.140 
************ 2025-05-04 01:03:22.182829 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-04 01:03:22.182843 | orchestrator | 2025-05-04 01:03:22.182856 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-04 01:03:22.182871 | orchestrator | Sunday 04 May 2025 01:02:35 +0000 (0:00:04.373) 0:00:26.513 ************ 2025-05-04 01:03:22.182885 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.182962 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:03:22.182986 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:03:22.183002 | orchestrator | 2025-05-04 01:03:22.183021 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-04 01:03:22.183036 | orchestrator | Sunday 04 May 2025 01:02:35 +0000 (0:00:00.699) 0:00:27.213 ************ 2025-05-04 01:03:22.183053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183336 | orchestrator | 2025-05-04 01:03:22.183351 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-04 01:03:22.183366 | orchestrator | Sunday 04 May 2025 
01:02:37 +0000 (0:00:01.690) 0:00:28.903 ************ 2025-05-04 01:03:22.183380 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.183395 | orchestrator | 2025-05-04 01:03:22.183409 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-04 01:03:22.183423 | orchestrator | Sunday 04 May 2025 01:02:37 +0000 (0:00:00.128) 0:00:29.032 ************ 2025-05-04 01:03:22.183437 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.183451 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:03:22.183465 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:03:22.183479 | orchestrator | 2025-05-04 01:03:22.183493 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-04 01:03:22.183508 | orchestrator | Sunday 04 May 2025 01:02:38 +0000 (0:00:00.339) 0:00:29.371 ************ 2025-05-04 01:03:22.183522 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:03:22.183537 | orchestrator | 2025-05-04 01:03:22.183551 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-04 01:03:22.183565 | orchestrator | Sunday 04 May 2025 01:02:38 +0000 (0:00:00.526) 0:00:29.898 ************ 2025-05-04 01:03:22.183580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.183651 | orchestrator | 2025-05-04 01:03:22.183665 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-04 01:03:22.183679 | orchestrator | Sunday 04 May 2025 01:02:40 +0000 (0:00:01.473) 0:00:31.372 ************ 2025-05-04 01:03:22.183705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.183720 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.183735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.183757 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:03:22.183801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.183821 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:03:22.183838 | orchestrator | 2025-05-04 01:03:22.183854 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-04 01:03:22.183870 | orchestrator | Sunday 04 May 2025 01:02:40 +0000 (0:00:00.532) 0:00:31.904 ************ 2025-05-04 01:03:22.183907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.183925 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.183953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.183979 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:03:22.183996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.184013 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:03:22.184030 | orchestrator | 2025-05-04 01:03:22.184046 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-04 01:03:22.184062 | orchestrator | Sunday 04 May 2025 01:02:41 +0000 (0:00:00.793) 0:00:32.698 ************ 2025-05-04 01:03:22.184087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184164 | orchestrator | 2025-05-04 01:03:22.184180 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-04 01:03:22.184194 | orchestrator | Sunday 04 May 2025 01:02:42 +0000 (0:00:01.486) 0:00:34.184 ************ 2025-05-04 01:03:22.184209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184264 | orchestrator | 2025-05-04 01:03:22.184278 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-04 01:03:22.184292 | orchestrator | Sunday 04 May 2025 01:02:45 +0000 (0:00:02.553) 0:00:36.737 ************ 2025-05-04 01:03:22.184307 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-04 01:03:22.184322 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-04 01:03:22.184336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-04 01:03:22.184350 | orchestrator | 2025-05-04 
01:03:22.184365 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-04 01:03:22.184379 | orchestrator | Sunday 04 May 2025 01:02:47 +0000 (0:00:02.167) 0:00:38.904 ************ 2025-05-04 01:03:22.184400 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:03:22.184414 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:03:22.184428 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:03:22.184443 | orchestrator | 2025-05-04 01:03:22.184457 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-04 01:03:22.184472 | orchestrator | Sunday 04 May 2025 01:02:49 +0000 (0:00:01.606) 0:00:40.511 ************ 2025-05-04 01:03:22.184499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.184515 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:03:22.184529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.184544 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:03:22.184584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-04 01:03:22.184600 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:03:22.184615 | orchestrator | 2025-05-04 01:03:22.184629 | orchestrator | TASK [placement : Check placement containers] 
********************************** 2025-05-04 01:03:22.184644 | orchestrator | Sunday 04 May 2025 01:02:49 +0000 (0:00:00.676) 0:00:41.187 ************ 2025-05-04 01:03:22.184658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-04 01:03:22.184729 | orchestrator | 2025-05-04 01:03:22.184743 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-04 01:03:22.184757 | orchestrator | Sunday 04 May 2025 01:02:51 +0000 (0:00:01.162) 0:00:42.349 ************ 2025-05-04 01:03:22.184772 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:03:22.184786 | orchestrator | 2025-05-04 01:03:22.184799 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-04 01:03:22.184814 | orchestrator | Sunday 04 May 2025 01:02:53 +0000 (0:00:02.283) 0:00:44.632 ************ 2025-05-04 01:03:22.184828 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:03:22.184842 | orchestrator | 2025-05-04 01:03:22.184856 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-04 01:03:22.184870 | orchestrator | Sunday 04 May 2025 01:02:55 +0000 (0:00:02.458) 0:00:47.091 
************ 2025-05-04 01:03:22.184935 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:03:22.186226 | orchestrator | 2025-05-04 01:03:22.186266 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-04 01:03:22.186288 | orchestrator | Sunday 04 May 2025 01:03:08 +0000 (0:00:12.364) 0:00:59.455 ************ 2025-05-04 01:03:22.186309 | orchestrator | 2025-05-04 01:03:22.186331 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-04 01:03:22.186349 | orchestrator | Sunday 04 May 2025 01:03:08 +0000 (0:00:00.074) 0:00:59.530 ************ 2025-05-04 01:03:22.186361 | orchestrator | 2025-05-04 01:03:22.186374 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-04 01:03:22.186398 | orchestrator | Sunday 04 May 2025 01:03:08 +0000 (0:00:00.210) 0:00:59.741 ************ 2025-05-04 01:03:22.186411 | orchestrator | 2025-05-04 01:03:22.186424 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-04 01:03:22.186437 | orchestrator | Sunday 04 May 2025 01:03:08 +0000 (0:00:00.073) 0:00:59.815 ************ 2025-05-04 01:03:22.186449 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:03:22.186462 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:03:22.186475 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:03:22.186487 | orchestrator | 2025-05-04 01:03:22.186500 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:03:22.186535 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-04 01:03:22.186550 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-04 01:03:22.186563 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-05-04 01:03:22.186576 | orchestrator | 2025-05-04 01:03:22.186589 | orchestrator | 2025-05-04 01:03:22.186601 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:03:22.186614 | orchestrator | Sunday 04 May 2025 01:03:18 +0000 (0:00:10.293) 0:01:10.109 ************ 2025-05-04 01:03:22.186627 | orchestrator | =============================================================================== 2025-05-04 01:03:22.186640 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.36s 2025-05-04 01:03:22.186652 | orchestrator | placement : Restart placement-api container ---------------------------- 10.29s 2025-05-04 01:03:22.186665 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.26s 2025-05-04 01:03:22.186677 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.37s 2025-05-04 01:03:22.186690 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.74s 2025-05-04 01:03:22.186702 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.51s 2025-05-04 01:03:22.186715 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.31s 2025-05-04 01:03:22.186727 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s 2025-05-04 01:03:22.186740 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.55s 2025-05-04 01:03:22.186753 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.46s 2025-05-04 01:03:22.186765 | orchestrator | placement : Creating placement databases -------------------------------- 2.28s 2025-05-04 01:03:22.186778 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.17s 2025-05-04 01:03:22.186790 | orchestrator | 
placement : Ensuring config directories exist --------------------------- 1.69s 2025-05-04 01:03:22.186803 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.61s 2025-05-04 01:03:22.186815 | orchestrator | placement : Copying over config.json files for services ----------------- 1.49s 2025-05-04 01:03:22.186828 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.47s 2025-05-04 01:03:22.186841 | orchestrator | placement : include_tasks ----------------------------------------------- 1.18s 2025-05-04 01:03:22.186856 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s 2025-05-04 01:03:22.186872 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.79s 2025-05-04 01:03:22.186903 | orchestrator | placement : include_tasks ----------------------------------------------- 0.70s 2025-05-04 01:03:22.186919 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:22.186933 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 84f3683f-94fb-4eea-bc05-8f9877ef55af is in state STARTED 2025-05-04 01:03:22.186969 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:22.188455 | orchestrator | 2025-05-04 01:03:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:22.188686 | orchestrator | 2025-05-04 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:25.241236 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:03:25.242710 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:25.244146 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in 
state STARTED 2025-05-04 01:03:25.245150 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task 84f3683f-94fb-4eea-bc05-8f9877ef55af is in state SUCCESS 2025-05-04 01:03:25.246488 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:25.247544 | orchestrator | 2025-05-04 01:03:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:28.305368 | orchestrator | 2025-05-04 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:28.305511 | orchestrator | 2025-05-04 01:03:28 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:03:28.307170 | orchestrator | 2025-05-04 01:03:28 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:28.307215 | orchestrator | 2025-05-04 01:03:28 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:28.308719 | orchestrator | 2025-05-04 01:03:28 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:31.342612 | orchestrator | 2025-05-04 01:03:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:03:31.342845 | orchestrator | 2025-05-04 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:03:31.342934 | orchestrator | 2025-05-04 01:03:31 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:03:31.343239 | orchestrator | 2025-05-04 01:03:31 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:03:31.343271 | orchestrator | 2025-05-04 01:03:31 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED 2025-05-04 01:03:31.343877 | orchestrator | 2025-05-04 01:03:31 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED 2025-05-04 01:03:31.344609 | orchestrator | 2025-05-04 01:03:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state 
2025-05-04 01:03:34.374331 | orchestrator | 2025-05-04 01:03:31 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:03:34.374455 | orchestrator | 2025-05-04 01:03:34 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:03:34.374814 | orchestrator | 2025-05-04 01:03:34 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:03:34.376246 | orchestrator | 2025-05-04 01:03:34 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED
2025-05-04 01:03:34.377766 | orchestrator | 2025-05-04 01:03:34 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:03:34.378100 | orchestrator | 2025-05-04 01:03:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:16.980288 | orchestrator | 2025-05-04 01:04:16 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:16.982284 | orchestrator | 2025-05-04 01:04:16 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:16.983692 | orchestrator | 2025-05-04 01:04:16 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state STARTED
2025-05-04 01:04:16.986782 | orchestrator | 2025-05-04 01:04:16 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:20.040094 | orchestrator | 2025-05-04 01:04:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:20.040213 | orchestrator | 2025-05-04 01:04:16 | INFO  | Wait 1
second(s) until the next check
2025-05-04 01:04:20.040249 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:20.041234 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:20.042566 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:20.053832 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task 94580cba-489f-45d0-a969-940c8aee4bd5 is in state SUCCESS
2025-05-04 01:04:20.055541 | orchestrator |
2025-05-04 01:04:20.055581 | orchestrator |
2025-05-04 01:04:20.055597 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 01:04:20.055613 | orchestrator |
2025-05-04 01:04:20.055628 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 01:04:20.055643 | orchestrator | Sunday 04 May 2025 01:03:21 +0000 (0:00:00.225) 0:00:00.225 ************
2025-05-04 01:04:20.055657 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.055673 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.055687 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.055701 | orchestrator |
2025-05-04 01:04:20.055736 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 01:04:20.055752 | orchestrator | Sunday 04 May 2025 01:03:22 +0000 (0:00:00.431) 0:00:00.657 ************
2025-05-04 01:04:20.055767 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-04 01:04:20.055781 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-04 01:04:20.055796 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-04 01:04:20.055960 | orchestrator |
2025-05-04 01:04:20.056594 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-04 01:04:20.056615 | orchestrator |
2025-05-04 01:04:20.057032 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-04 01:04:20.057057 | orchestrator | Sunday 04 May 2025 01:03:22 +0000 (0:00:00.529) 0:00:01.186 ************
2025-05-04 01:04:20.057072 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.057087 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.057101 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.057115 | orchestrator |
2025-05-04 01:04:20.057130 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 01:04:20.057158 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 01:04:20.057175 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 01:04:20.057189 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-04 01:04:20.057204 | orchestrator |
2025-05-04 01:04:20.057218 | orchestrator |
2025-05-04 01:04:20.057233 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 01:04:20.057247 | orchestrator | Sunday 04 May 2025 01:03:23 +0000 (0:00:00.819) 0:00:02.005 ************
2025-05-04 01:04:20.057262 | orchestrator | ===============================================================================
2025-05-04 01:04:20.057276 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.82s
2025-05-04 01:04:20.057290 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-05-04 01:04:20.057305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-05-04 01:04:20.057319 | orchestrator |
2025-05-04 01:04:20.057333 | orchestrator |
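The play above gates the rest of the deployment on Keystone's public port accepting TCP connections (kolla-ansible typically implements this with Ansible's `wait_for` module). A minimal Python sketch of the same retry-until-reachable check; the function name, timeout, and interval values here are illustrative and not taken from this job:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect is enough; the service banner is not inspected.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Port not reachable yet: wait and retry, like the "Wait 1 second(s)"
            # polling visible in the log above.
            time.sleep(interval)
    return False
```

The check deliberately tests only reachability, not API health; the job relies on later playbook tasks failing if the service answers but misbehaves.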
2025-05-04 01:04:20.057347 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 01:04:20.057361 | orchestrator |
2025-05-04 01:04:20.057376 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 01:04:20.057390 | orchestrator | Sunday 04 May 2025 00:59:39 +0000 (0:00:00.243) 0:00:00.243 ************
2025-05-04 01:04:20.057404 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.057419 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.057467 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.057483 | orchestrator | ok: [testbed-node-3]
2025-05-04 01:04:20.057497 | orchestrator | ok: [testbed-node-4]
2025-05-04 01:04:20.057523 | orchestrator | ok: [testbed-node-5]
2025-05-04 01:04:20.057539 | orchestrator |
2025-05-04 01:04:20.057579 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 01:04:20.057596 | orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.899) 0:00:01.142 ************
2025-05-04 01:04:20.057614 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-04 01:04:20.057630 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-04 01:04:20.057647 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-04 01:04:20.057664 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-04 01:04:20.057680 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-04 01:04:20.057697 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-04 01:04:20.057712 | orchestrator |
2025-05-04 01:04:20.057729 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-04 01:04:20.057745 | orchestrator |
2025-05-04 01:04:20.057762 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-04 01:04:20.057778 | orchestrator | Sunday 04 May 2025 00:59:41 +0000 (0:00:00.973) 0:00:02.115 ************
2025-05-04 01:04:20.057795 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 01:04:20.057813 | orchestrator |
2025-05-04 01:04:20.057830 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-04 01:04:20.057846 | orchestrator | Sunday 04 May 2025 00:59:42 +0000 (0:00:01.165) 0:00:03.281 ************
2025-05-04 01:04:20.057862 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.057879 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.057934 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.057951 | orchestrator | ok: [testbed-node-3]
2025-05-04 01:04:20.057965 | orchestrator | ok: [testbed-node-4]
2025-05-04 01:04:20.057979 | orchestrator | ok: [testbed-node-5]
2025-05-04 01:04:20.057993 | orchestrator |
2025-05-04 01:04:20.058008 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-04 01:04:20.058070 | orchestrator | Sunday 04 May 2025 00:59:44 +0000 (0:00:01.262) 0:00:04.544 ************
2025-05-04 01:04:20.058086 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.058100 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.058114 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.058128 | orchestrator | ok: [testbed-node-3]
2025-05-04 01:04:20.058142 | orchestrator | ok: [testbed-node-4]
2025-05-04 01:04:20.058168 | orchestrator | ok: [testbed-node-5]
2025-05-04 01:04:20.058182 | orchestrator |
2025-05-04 01:04:20.058197 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-04 01:04:20.058211 | orchestrator | Sunday 04 May 2025 00:59:45 +0000 (0:00:00.819) 0:00:05.719 ************
2025-05-04 01:04:20.058226 | orchestrator | ok:
[testbed-node-0] => {
2025-05-04 01:04:20.058240 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058267 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058282 | orchestrator | }
2025-05-04 01:04:20.058296 | orchestrator | ok: [testbed-node-1] => {
2025-05-04 01:04:20.058310 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058325 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058339 | orchestrator | }
2025-05-04 01:04:20.058353 | orchestrator | ok: [testbed-node-2] => {
2025-05-04 01:04:20.058367 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058381 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058395 | orchestrator | }
2025-05-04 01:04:20.058410 | orchestrator | ok: [testbed-node-3] => {
2025-05-04 01:04:20.058424 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058438 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058452 | orchestrator | }
2025-05-04 01:04:20.058475 | orchestrator | ok: [testbed-node-4] => {
2025-05-04 01:04:20.058489 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058503 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058517 | orchestrator | }
2025-05-04 01:04:20.058531 | orchestrator | ok: [testbed-node-5] => {
2025-05-04 01:04:20.058546 | orchestrator |     "changed": false,
2025-05-04 01:04:20.058559 | orchestrator |     "msg": "All assertions passed"
2025-05-04 01:04:20.058573 | orchestrator | }
2025-05-04 01:04:20.058587 | orchestrator |
2025-05-04 01:04:20.058602 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-04 01:04:20.058616 | orchestrator | Sunday 04 May 2025 00:59:46 +0000 (0:00:00.819) 0:00:06.538 ************
2025-05-04 01:04:20.058630 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:20.058644 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:20.058658 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:20.058673 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:04:20.058686 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:04:20.058700 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:04:20.058714 | orchestrator |
2025-05-04 01:04:20.058729 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-04 01:04:20.058748 | orchestrator | Sunday 04 May 2025 00:59:47 +0000 (0:00:00.985) 0:00:07.523 ************
2025-05-04 01:04:20.058763 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-04 01:04:20.058777 | orchestrator |
2025-05-04 01:04:20.058792 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-04 01:04:20.058805 | orchestrator | Sunday 04 May 2025 00:59:50 +0000 (0:00:03.092) 0:00:10.616 ************
2025-05-04 01:04:20.058820 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-04 01:04:20.058835 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-04 01:04:20.058849 | orchestrator |
2025-05-04 01:04:20.058863 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-04 01:04:20.058877 | orchestrator | Sunday 04 May 2025 00:59:56 +0000 (0:00:06.485) 0:00:17.102 ************
2025-05-04 01:04:20.059259 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-04 01:04:20.059281 | orchestrator |
2025-05-04 01:04:20.059294 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-04 01:04:20.059307 | orchestrator | Sunday 04 May 2025 01:00:00 +0000 (0:00:03.336) 0:00:20.438 ************
2025-05-04 01:04:20.059319 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-04 01:04:20.059332 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-04 01:04:20.059345 | orchestrator |
2025-05-04 01:04:20.059357 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-04 01:04:20.059370 | orchestrator | Sunday 04 May 2025 01:00:03 +0000 (0:00:03.718) 0:00:24.157 ************
2025-05-04 01:04:20.059383 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-04 01:04:20.059395 | orchestrator |
2025-05-04 01:04:20.059408 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-04 01:04:20.059421 | orchestrator | Sunday 04 May 2025 01:00:06 +0000 (0:00:03.209) 0:00:27.367 ************
2025-05-04 01:04:20.059433 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-04 01:04:20.059446 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-04 01:04:20.059458 | orchestrator |
2025-05-04 01:04:20.059471 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-04 01:04:20.059484 | orchestrator | Sunday 04 May 2025 01:00:15 +0000 (0:00:08.280) 0:00:35.647 ************
2025-05-04 01:04:20.059496 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:20.059509 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:20.059522 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:20.059534 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:04:20.059557 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:04:20.059570 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:04:20.059582 | orchestrator |
2025-05-04 01:04:20.059595 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-04 01:04:20.059608 | orchestrator | Sunday 04 May 2025 01:00:16 +0000 (0:00:00.800) 0:00:36.448 ************
2025-05-04 01:04:20.059620 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:20.059633 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:20.059680 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:20.059694 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:04:20.059706 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:04:20.059778 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:04:20.059792 | orchestrator |
2025-05-04 01:04:20.059804 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-04 01:04:20.059840 | orchestrator | Sunday 04 May 2025 01:00:19 +0000 (0:00:03.175) 0:00:39.624 ************
2025-05-04 01:04:20.059853 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:20.059955 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:20.059975 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:20.059989 | orchestrator | ok: [testbed-node-3]
2025-05-04 01:04:20.060001 | orchestrator | ok: [testbed-node-4]
2025-05-04 01:04:20.060025 | orchestrator | ok: [testbed-node-5]
2025-05-04 01:04:20.060038 | orchestrator |
2025-05-04 01:04:20.060051 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-04 01:04:20.060064 | orchestrator | Sunday 04 May 2025 01:00:20 +0000 (0:00:00.963) 0:00:40.588 ************
2025-05-04 01:04:20.060077 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:20.060089 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:20.060148 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:04:20.060162 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:04:20.060175 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:04:20.060187 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:20.060199 | orchestrator |
2025-05-04 01:04:20.060212 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-04 01:04:20.060225 | orchestrator | Sunday 04 May 2025 01:00:23 +0000 (0:00:02.874) 0:00:43.463 ************
2025-05-04
01:04:20.060240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.060281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.060343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.060702 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.060786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.060803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060849 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.060873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.060913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.060956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.060985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.061077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': 
False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.061100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.061204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.061246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.061310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.061331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.061387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.061459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.061555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061602 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.061639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-04 01:04:20.061716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.061732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.061768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.061783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.061831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.061853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.061906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.061991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.062063 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.062163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.062242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.062330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.062370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.062431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.062446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.062492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.062530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.062586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.062607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.062643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.062684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.062700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062722 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.062745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.062760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.062775 | orchestrator | 2025-05-04 01:04:20.062789 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-04 01:04:20.062804 | orchestrator | Sunday 04 May 2025 01:00:26 +0000 (0:00:03.710) 0:00:47.173 ************ 2025-05-04 01:04:20.062819 | orchestrator | [WARNING]: Skipped 2025-05-04 01:04:20.062833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-04 01:04:20.062848 | orchestrator | due to this access issue: 2025-05-04 01:04:20.062862 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-04 01:04:20.062876 | orchestrator | a directory 2025-05-04 01:04:20.063001 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:04:20.063038 | orchestrator | 2025-05-04 01:04:20.063084 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-04 01:04:20.063100 | orchestrator | Sunday 04 May 2025 01:00:27 +0000 (0:00:00.919) 0:00:48.093 ************ 2025-05-04 01:04:20.063115 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:04:20.063130 | orchestrator | 2025-05-04 01:04:20.063173 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-04 01:04:20.063189 | orchestrator | Sunday 04 May 2025 01:00:28 +0000 (0:00:01.289) 0:00:49.383 ************ 2025-05-04 
01:04:20.063204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.063250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.063267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.063282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.063298 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.063312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.063337 | orchestrator | 2025-05-04 01:04:20.063352 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-04 01:04:20.063366 | orchestrator | Sunday 04 May 2025 01:00:33 +0000 (0:00:04.950) 0:00:54.334 ************ 2025-05-04 01:04:20.063404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063420 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.063435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063450 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.063464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063478 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.063495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063514 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.063537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063551 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.063589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063604 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.063617 | orchestrator | 2025-05-04 01:04:20.063630 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-04 01:04:20.063643 | orchestrator | Sunday 04 May 2025 01:00:38 +0000 (0:00:04.317) 0:00:58.651 ************ 2025-05-04 01:04:20.063655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063684 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.063698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063718 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.063742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063756 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.063775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063789 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.063802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.063816 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.063829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.063842 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.063854 | orchestrator | 2025-05-04 01:04:20.063867 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-04 01:04:20.063904 | orchestrator | Sunday 04 May 2025 01:00:42 +0000 (0:00:04.097) 0:01:02.749 ************ 2025-05-04 01:04:20.063937 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.063952 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.063965 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.063978 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.063990 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.064003 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.064015 | orchestrator | 2025-05-04 01:04:20.064114 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-04 
01:04:20.064130 | orchestrator | Sunday 04 May 2025 01:00:45 +0000 (0:00:03.311) 0:01:06.061 ************ 2025-05-04 01:04:20.064143 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.064156 | orchestrator | 2025-05-04 01:04:20.064169 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-04 01:04:20.064182 | orchestrator | Sunday 04 May 2025 01:00:45 +0000 (0:00:00.118) 0:01:06.179 ************ 2025-05-04 01:04:20.064194 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.064207 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.064219 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.064231 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.064244 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.064257 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.064269 | orchestrator | 2025-05-04 01:04:20.064282 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-04 01:04:20.064294 | orchestrator | Sunday 04 May 2025 01:00:46 +0000 (0:00:00.644) 0:01:06.824 ************ 2025-05-04 01:04:20.064320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.064343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.064403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-04 01:04:20.064416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.064494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.064520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.064605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.064620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064633 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.064646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.064678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.064739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.064827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.064854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.064867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.064930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.064965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.064986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065000 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.065013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.065035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065056 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.065103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-04 01:04:20.065156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.065191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.065218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.065277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.065291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065304 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.065326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.065340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-04 01:04:20.065394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.065407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
01:04:20.065446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.065499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.065525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.065588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.065602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065615 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.065628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.065651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.065717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 
01:04:20.065735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.065771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.065790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.065810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.066355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.066388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.066425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.066450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066465 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.066489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.066504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.066591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.066642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.066668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.066716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.066809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.066823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.066953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.066969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.066990 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.067006 | orchestrator | 2025-05-04 01:04:20.067021 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-04 01:04:20.067035 | orchestrator | Sunday 04 May 2025 01:00:49 +0000 (0:00:02.619) 0:01:09.443 ************ 2025-05-04 01:04:20.067051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.067076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 
01:04:20.067135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.067191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067238 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.067373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067425 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.067497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.067561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.067574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.067616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.067695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.067790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.067828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.067913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.067930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.067964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.067978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.068098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068155 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.068170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.068217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.068265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.068317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.068332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.068415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.068444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.068482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:04:20.068519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-04 01:04:20.068533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.068601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-04 01:04:20.068722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.068760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-04 01:04:20.068816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.068899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.068913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-04 01:04:20.068957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.068992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-04 01:04:20.069018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:04:20.069140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-04 01:04:20.069202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:04:20.069309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-04 01:04:20.069350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:04:20.069439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-04 01:04:20.069512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.069527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069540 | orchestrator |
2025-05-04 01:04:20.069553 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-05-04 01:04:20.069566 | orchestrator | Sunday 04 May 2025 01:00:53 +0000 (0:00:04.459) 0:01:13.903 ************
2025-05-04 01:04:20.069580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-04 01:04:20.069593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-04 01:04:20.069669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:04:20.069714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.069747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value':
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.069761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.069831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.069866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.069892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.069920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.069970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.070571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.070599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.070613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.070637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.070719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.071264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.071414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071431 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.071443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.071455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.071847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.071874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.072351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.072378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.072400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.072816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.072828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.072846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.072975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.072986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.072998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.073077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.073101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.073165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.073193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.073213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.073619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-04 01:04:20.073683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.073712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.073780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.073813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073824 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.073835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.073846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.073966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.073986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.074005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.074066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.074174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.074236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.074249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-04 01:04:20.074317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.074334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.074364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.074468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.074483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.074517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.074538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-04 01:04:20.074633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-04 01:04:20.074644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-04 01:04:20.074653 | orchestrator |
2025-05-04 01:04:20.074664 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-05-04 01:04:20.074674 | orchestrator | Sunday 04 May 2025 01:01:02 +0000 (0:00:09.133) 0:01:23.036 ************
2025-05-04 01:04:20.074684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value':
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.074695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.074799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.074921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.074940 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.074950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.074959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.075066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.075078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075088 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.075098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.075108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.075211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.075332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.075351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.075427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.075440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075449 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.075466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.075476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.075564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.075690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.075718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.075727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.075803 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.075818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075829 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.075847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.075858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075904 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.075976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.075995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-04 01:04:20.076028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.076105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.076159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.076245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.076320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.076371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.076490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.076633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076742 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.076757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.076766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.076776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.076870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.076932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.076941 | orchestrator | 2025-05-04 01:04:20.076951 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-04 01:04:20.076960 | orchestrator | Sunday 04 May 2025 01:01:06 +0000 (0:00:04.228) 0:01:27.265 ************ 2025-05-04 01:04:20.076969 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.076978 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:04:20.076987 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:20.076995 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.077004 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.077013 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:04:20.077022 | orchestrator | 2025-05-04 01:04:20.077031 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-04 01:04:20.077040 | orchestrator | Sunday 04 May 2025 01:01:12 +0000 (0:00:05.599) 0:01:32.864 ************ 2025-05-04 01:04:20.077110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.077134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.077229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.077295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.077354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.077398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.077407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.077454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077466 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.077483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 
01:04:20.077512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.077539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.077648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.077666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.077765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.077773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077782 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.077791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.077846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.077903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.077912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.077986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.078051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078109 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.078136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078153 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.078162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.078210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.078270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 
01:04:20.078279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.078387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.078474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.078492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.078599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.078715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.078804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.078817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.078836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.078977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.078986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.078999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
01:04:20.079014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.079069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.079084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.079092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.079153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.079164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079171 | orchestrator | 2025-05-04 01:04:20.079179 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-04 01:04:20.079205 | orchestrator | Sunday 04 May 2025 01:01:16 +0000 (0:00:04.208) 0:01:37.073 ************ 2025-05-04 01:04:20.079213 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079221 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079228 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079235 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079242 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079249 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079257 | orchestrator | 2025-05-04 01:04:20.079264 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-04 01:04:20.079271 | orchestrator | Sunday 04 May 2025 01:01:19 +0000 (0:00:02.625) 0:01:39.698 ************ 2025-05-04 01:04:20.079279 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079286 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079293 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079304 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079312 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079324 | orchestrator | skipping: [testbed-node-5] 2025-05-04 
01:04:20.079331 | orchestrator | 2025-05-04 01:04:20.079339 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-04 01:04:20.079346 | orchestrator | Sunday 04 May 2025 01:01:21 +0000 (0:00:02.674) 0:01:42.373 ************ 2025-05-04 01:04:20.079353 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079360 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079367 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079374 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079381 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079388 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079396 | orchestrator | 2025-05-04 01:04:20.079403 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-04 01:04:20.079416 | orchestrator | Sunday 04 May 2025 01:01:24 +0000 (0:00:02.409) 0:01:44.782 ************ 2025-05-04 01:04:20.079424 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079431 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079438 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079445 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079452 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079459 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079466 | orchestrator | 2025-05-04 01:04:20.079474 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-04 01:04:20.079481 | orchestrator | Sunday 04 May 2025 01:01:26 +0000 (0:00:02.028) 0:01:46.811 ************ 2025-05-04 01:04:20.079488 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079495 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079502 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079509 | orchestrator | skipping: [testbed-node-1] 2025-05-04 
01:04:20.079516 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079523 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079530 | orchestrator | 2025-05-04 01:04:20.079537 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-04 01:04:20.079544 | orchestrator | Sunday 04 May 2025 01:01:28 +0000 (0:00:01.901) 0:01:48.713 ************ 2025-05-04 01:04:20.079551 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079558 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079565 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079572 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079580 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079587 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079594 | orchestrator | 2025-05-04 01:04:20.079604 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-04 01:04:20.079611 | orchestrator | Sunday 04 May 2025 01:01:30 +0000 (0:00:02.009) 0:01:50.722 ************ 2025-05-04 01:04:20.079619 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079626 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.079633 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079640 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.079647 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079655 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.079664 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079671 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.079679 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079686 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.079693 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-04 01:04:20.079755 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.079768 | orchestrator | 2025-05-04 01:04:20.079775 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-04 01:04:20.079783 | orchestrator | Sunday 04 May 2025 01:01:33 +0000 (0:00:03.083) 0:01:53.806 ************ 2025-05-04 01:04:20.079791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.079799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-04 01:04:20.079868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.079894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
01:04:20.079910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.079925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.079941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.079987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.079998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.080028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080047 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.080090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.080100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.080188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 
01:04:20.080199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.080293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.080327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080349 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.080401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.080413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.080455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.080565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.080635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080654 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.080661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.080705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080716 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.080759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-04 01:04:20.080822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.080864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.080872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.080947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.080956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.080967 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.080975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.080982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-04 01:04:20.081062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.081073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
01:04:20.081088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.081178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.081245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081273 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.081281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.081289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.081370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 
01:04:20.081378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.081483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.081549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081571 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.081579 | orchestrator | 2025-05-04 01:04:20.081594 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-04 01:04:20.081602 | orchestrator | Sunday 04 May 2025 01:01:35 +0000 (0:00:02.327) 0:01:56.133 ************ 2025-05-04 01:04:20.081610 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.081617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.081699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.081812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.081820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.081871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.081899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081907 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.081921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.081929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.081996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.082004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.082157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.082180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082236 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.082250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.082258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.082330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.082437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.082461 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082514 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.082531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.082539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082554 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.082610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-04 01:04:20.082653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.082723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.082746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.082769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082777 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.082828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.082840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent 
' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.082950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.082984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.082991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.083002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083009 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.083048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.083063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.083077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.083087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083094 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.083136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.083169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.083202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.083209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.083227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.083241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.083265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.083280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.083293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.083299 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083313 | orchestrator | 2025-05-04 01:04:20.083320 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-04 01:04:20.083326 | orchestrator | Sunday 04 May 2025 01:01:38 +0000 (0:00:02.340) 0:01:58.473 ************ 2025-05-04 01:04:20.083333 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083339 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083346 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083352 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083358 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083365 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083371 | orchestrator | 2025-05-04 01:04:20.083378 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-04 01:04:20.083384 | orchestrator | Sunday 04 May 2025 01:01:41 +0000 (0:00:02.959) 0:02:01.433 ************ 2025-05-04 01:04:20.083391 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083397 | orchestrator | skipping: [testbed-node-1] 2025-05-04 
01:04:20.083404 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083411 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:04:20.083419 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:04:20.083425 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:04:20.083432 | orchestrator | 2025-05-04 01:04:20.083438 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-04 01:04:20.083444 | orchestrator | Sunday 04 May 2025 01:01:46 +0000 (0:00:05.192) 0:02:06.625 ************ 2025-05-04 01:04:20.083451 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083457 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083463 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083470 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083476 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083482 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083489 | orchestrator | 2025-05-04 01:04:20.083495 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-04 01:04:20.083501 | orchestrator | Sunday 04 May 2025 01:01:48 +0000 (0:00:01.844) 0:02:08.469 ************ 2025-05-04 01:04:20.083508 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083514 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083520 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083541 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083548 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083554 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083561 | orchestrator | 2025-05-04 01:04:20.083567 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-04 01:04:20.083573 | orchestrator | Sunday 04 May 2025 01:01:50 +0000 (0:00:02.304) 0:02:10.774 ************ 2025-05-04 
01:04:20.083580 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083586 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083592 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083599 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083609 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083616 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083622 | orchestrator | 2025-05-04 01:04:20.083628 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-04 01:04:20.083635 | orchestrator | Sunday 04 May 2025 01:01:54 +0000 (0:00:04.231) 0:02:15.005 ************ 2025-05-04 01:04:20.083641 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083647 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083653 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083660 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083666 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083672 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083679 | orchestrator | 2025-05-04 01:04:20.083685 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-04 01:04:20.083693 | orchestrator | Sunday 04 May 2025 01:01:57 +0000 (0:00:03.272) 0:02:18.278 ************ 2025-05-04 01:04:20.083700 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083707 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083714 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083721 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083729 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083736 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083743 | orchestrator | 2025-05-04 01:04:20.083750 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2025-05-04 01:04:20.083757 | orchestrator | Sunday 04 May 2025 01:01:59 +0000 (0:00:02.002) 0:02:20.281 ************ 2025-05-04 01:04:20.083765 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083773 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083780 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083787 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083794 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.083800 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.083806 | orchestrator | 2025-05-04 01:04:20.083813 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-04 01:04:20.083819 | orchestrator | Sunday 04 May 2025 01:02:03 +0000 (0:00:03.795) 0:02:24.076 ************ 2025-05-04 01:04:20.083825 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.083832 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.083838 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.083985 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.083995 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.084001 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.084008 | orchestrator | 2025-05-04 01:04:20.084014 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-04 01:04:20.084021 | orchestrator | Sunday 04 May 2025 01:02:06 +0000 (0:00:03.216) 0:02:27.293 ************ 2025-05-04 01:04:20.084027 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.084033 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.084040 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.084046 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.084052 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.084061 | orchestrator | skipping: 
[testbed-node-4] 2025-05-04 01:04:20.084067 | orchestrator | 2025-05-04 01:04:20.084074 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-04 01:04:20.084080 | orchestrator | Sunday 04 May 2025 01:02:08 +0000 (0:00:01.978) 0:02:29.271 ************ 2025-05-04 01:04:20.084087 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084093 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.084100 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084106 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.084113 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084123 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.084130 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084136 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.084143 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084149 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.084156 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-04 01:04:20.084162 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.084169 | orchestrator | 2025-05-04 01:04:20.084175 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-04 01:04:20.084181 | orchestrator | Sunday 04 May 2025 01:02:10 +0000 (0:00:02.101) 0:02:31.373 ************ 2025-05-04 01:04:20.084210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.084218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-04 01:04:20.084263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084308 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.084327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.084350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084376 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.084383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.084405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.084437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.084503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.084542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084552 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084559 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:20.084566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.084586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.084617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.084694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 
01:04:20.084701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.084731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084748 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.084755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.084762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.084807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.084874 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.084909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.084941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.084951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084958 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.084965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.084971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.084999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.085117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085134 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.085141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.085148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 
5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.085281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085303 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085311 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.085318 | orchestrator | 2025-05-04 01:04:20.085324 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-04 01:04:20.085331 | orchestrator | Sunday 04 May 2025 01:02:12 +0000 (0:00:02.009) 0:02:33.383 ************ 2025-05-04 01:04:20.085337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.085344 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.085351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-04 01:04:20.085508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.085586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-04 01:04:20.085643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-05-04 01:04:20.085724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.085755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085783 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085827 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-04 01:04:20.085842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.085849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.085948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.085955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.085970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.085984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.085991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.085997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.086007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.086028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086035 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.086044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.086060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.086067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.086080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.086088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-04 01:04:20.086104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:04:20.086117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:04:20.086123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-04 01:04:20.086139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-04 01:04:20.086148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-04 01:04:20.086154 | orchestrator | 2025-05-04 01:04:20.086160 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-04 01:04:20.086166 | orchestrator | Sunday 04 May 2025 01:02:16 +0000 (0:00:03.213) 0:02:36.596 ************ 2025-05-04 01:04:20.086172 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:20.086179 | orchestrator | 
skipping: [testbed-node-1] 2025-05-04 01:04:20.086185 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:20.086191 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:04:20.086197 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:04:20.086203 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:04:20.086209 | orchestrator | 2025-05-04 01:04:20.086215 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-04 01:04:20.086221 | orchestrator | Sunday 04 May 2025 01:02:16 +0000 (0:00:00.751) 0:02:37.347 ************ 2025-05-04 01:04:20.086227 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:20.086233 | orchestrator | 2025-05-04 01:04:20.086239 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-04 01:04:20.086245 | orchestrator | Sunday 04 May 2025 01:02:19 +0000 (0:00:02.470) 0:02:39.818 ************ 2025-05-04 01:04:20.086251 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:20.086257 | orchestrator | 2025-05-04 01:04:20.086263 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-04 01:04:20.086269 | orchestrator | Sunday 04 May 2025 01:02:21 +0000 (0:00:02.161) 0:02:41.979 ************ 2025-05-04 01:04:20.086275 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:20.086281 | orchestrator | 2025-05-04 01:04:20.086287 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-04 01:04:20.086293 | orchestrator | Sunday 04 May 2025 01:02:59 +0000 (0:00:38.344) 0:03:20.324 ************ 2025-05-04 01:04:20.086299 | orchestrator | 2025-05-04 01:04:20.086305 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-04 01:04:20.086311 | orchestrator | Sunday 04 May 2025 01:02:59 +0000 (0:00:00.067) 0:03:20.392 ************ 2025-05-04 01:04:20.086317 | orchestrator | 
2025-05-04 01:04:20.086323 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-04 01:04:20.086329 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:00.252) 0:03:20.645 ************
2025-05-04 01:04:20.086335 | orchestrator |
2025-05-04 01:04:20.086341 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-04 01:04:20.086347 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:00.066) 0:03:20.712 ************
2025-05-04 01:04:20.086353 | orchestrator |
2025-05-04 01:04:20.086359 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-04 01:04:20.086365 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:00.053) 0:03:20.765 ************
2025-05-04 01:04:20.086371 | orchestrator |
2025-05-04 01:04:20.086377 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-04 01:04:20.086384 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:00.052) 0:03:20.818 ************
2025-05-04 01:04:20.086390 | orchestrator |
2025-05-04 01:04:20.086396 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-04 01:04:20.086402 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:00.280) 0:03:21.098 ************
2025-05-04 01:04:20.086408 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:04:20.086415 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:04:20.086421 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:04:20.086427 | orchestrator |
2025-05-04 01:04:20.086433 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-04 01:04:20.086441 | orchestrator | Sunday 04 May 2025 01:03:26 +0000 (0:00:25.614) 0:03:46.712 ************
2025-05-04 01:04:23.112733 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:04:23.112863 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:04:23.112930 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:04:23.112948 | orchestrator |
2025-05-04 01:04:23.112964 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 01:04:23.112982 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-04 01:04:23.112998 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-04 01:04:23.113013 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-04 01:04:23.113027 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-04 01:04:23.113041 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-04 01:04:23.113197 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-04 01:04:23.113213 | orchestrator |
2025-05-04 01:04:23.113239 | orchestrator |
2025-05-04 01:04:23.113254 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 01:04:23.113269 | orchestrator | Sunday 04 May 2025 01:04:17 +0000 (0:00:51.256) 0:04:37.968 ************
2025-05-04 01:04:23.113283 | orchestrator | ===============================================================================
2025-05-04 01:04:23.113298 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.26s
2025-05-04 01:04:23.113312 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.34s
2025-05-04 01:04:23.113326 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.61s
2025-05-04 01:04:23.113341 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.13s
2025-05-04 01:04:23.113376 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.28s
2025-05-04 01:04:23.113392 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.49s
2025-05-04 01:04:23.113406 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.60s
2025-05-04 01:04:23.113420 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.19s
2025-05-04 01:04:23.113434 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.95s
2025-05-04 01:04:23.113448 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.46s
2025-05-04 01:04:23.113462 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.32s
2025-05-04 01:04:23.113477 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.23s
2025-05-04 01:04:23.113521 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.23s
2025-05-04 01:04:23.113536 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.21s
2025-05-04 01:04:23.113550 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.10s
2025-05-04 01:04:23.113632 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.80s
2025-05-04 01:04:23.113650 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.72s
2025-05-04 01:04:23.113665 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.71s
2025-05-04 01:04:23.113680 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.34s
2025-05-04 01:04:23.113695 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.31s
2025-05-04 01:04:23.113709 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:23.113725 | orchestrator | 2025-05-04 01:04:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:23.113739 | orchestrator | 2025-05-04 01:04:20 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:23.113772 | orchestrator | 2025-05-04 01:04:23 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:23.116687 | orchestrator | 2025-05-04 01:04:23 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:23.116718 | orchestrator | 2025-05-04 01:04:23 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:23.116733 | orchestrator | 2025-05-04 01:04:23 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:23.116755 | orchestrator | 2025-05-04 01:04:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:26.170350 | orchestrator | 2025-05-04 01:04:23 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:26.170528 | orchestrator | 2025-05-04 01:04:26 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:26.171093 | orchestrator | 2025-05-04 01:04:26 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:26.171422 | orchestrator | 2025-05-04 01:04:26 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:26.171453 | orchestrator | 2025-05-04 01:04:26 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:26.172750 | orchestrator | 2025-05-04 01:04:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:29.222416 | orchestrator | 2025-05-04 01:04:26 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:29.222800 | orchestrator | 2025-05-04 01:04:29 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:29.222864 | orchestrator | 2025-05-04 01:04:29 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:29.224756 | orchestrator | 2025-05-04 01:04:29 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:29.225263 | orchestrator | 2025-05-04 01:04:29 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:29.226717 | orchestrator | 2025-05-04 01:04:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:32.280048 | orchestrator | 2025-05-04 01:04:29 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:32.281060 | orchestrator | 2025-05-04 01:04:32 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:32.282473 | orchestrator | 2025-05-04 01:04:32 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:32.282541 | orchestrator | 2025-05-04 01:04:32 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:32.282558 | orchestrator | 2025-05-04 01:04:32 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:32.282581 | orchestrator | 2025-05-04 01:04:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:35.319585 | orchestrator | 2025-05-04 01:04:32 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:35.319742 | orchestrator | 2025-05-04 01:04:35 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:35.322757 | orchestrator | 2025-05-04 01:04:35 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:35.323334 | orchestrator | 2025-05-04 01:04:35 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:35.324179 | orchestrator | 2025-05-04 01:04:35 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:35.324723 | orchestrator | 2025-05-04 01:04:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:38.360956 | orchestrator | 2025-05-04 01:04:35 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:38.361086 | orchestrator | 2025-05-04 01:04:38 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:38.362403 | orchestrator | 2025-05-04 01:04:38 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:38.365178 | orchestrator | 2025-05-04 01:04:38 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:38.367776 | orchestrator | 2025-05-04 01:04:38 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:38.370227 | orchestrator | 2025-05-04 01:04:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:41.427131 | orchestrator | 2025-05-04 01:04:38 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:41.427260 | orchestrator | 2025-05-04 01:04:41 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:41.429081 | orchestrator | 2025-05-04 01:04:41 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:41.431058 | orchestrator | 2025-05-04 01:04:41 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:41.432960 | orchestrator | 2025-05-04 01:04:41 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state STARTED
2025-05-04 01:04:41.434722 | orchestrator | 2025-05-04 01:04:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:04:44.490378 | orchestrator | 2025-05-04 01:04:41 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:04:44.490526 | orchestrator | 2025-05-04 01:04:44 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED
2025-05-04 01:04:44.492596 | orchestrator | 2025-05-04 01:04:44 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
2025-05-04 01:04:44.494365 | orchestrator | 2025-05-04 01:04:44 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED
2025-05-04 01:04:44.498440 | orchestrator |
2025-05-04 01:04:44.498495 | orchestrator |
2025-05-04 01:04:44.498511 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 01:04:44.498526 | orchestrator |
2025-05-04 01:04:44.498828 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 01:04:44.498851 | orchestrator | Sunday 04 May 2025 01:02:49 +0000 (0:00:00.338) 0:00:00.338 ************
2025-05-04 01:04:44.498937 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:04:44.498957 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:04:44.498971 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:04:44.498986 | orchestrator |
2025-05-04 01:04:44.499001 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 01:04:44.499016 | orchestrator | Sunday 04 May 2025 01:02:49 +0000 (0:00:00.371) 0:00:00.709 ************
2025-05-04 01:04:44.499030 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-04 01:04:44.499045 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-04 01:04:44.499059 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-04 01:04:44.499073 | orchestrator |
2025-05-04 01:04:44.499088 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-04 01:04:44.499102 | orchestrator |
2025-05-04 01:04:44.499116 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-04 01:04:44.499131 | orchestrator | Sunday 04 May 2025 01:02:50 +0000 (0:00:00.274) 0:00:00.984 ************
2025-05-04 01:04:44.499145 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 01:04:44.499162 | orchestrator |
2025-05-04 01:04:44.499176 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-04 01:04:44.499191 | orchestrator | Sunday 04 May 2025 01:02:50 +0000 (0:00:00.626) 0:00:01.610 ************
2025-05-04 01:04:44.499205 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-04 01:04:44.499220 | orchestrator |
2025-05-04 01:04:44.499234 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-04 01:04:44.499248 | orchestrator | Sunday 04 May 2025 01:02:54 +0000 (0:00:03.411) 0:00:05.022 ************
2025-05-04 01:04:44.499262 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-04 01:04:44.499295 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-04 01:04:44.499310 | orchestrator |
2025-05-04 01:04:44.499325 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-04 01:04:44.499339 | orchestrator | Sunday 04 May 2025 01:03:00 +0000 (0:00:06.512) 0:00:11.534 ************
2025-05-04 01:04:44.499357 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-04 01:04:44.499384 | orchestrator |
2025-05-04 01:04:44.499408 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-04 01:04:44.499437 | orchestrator | Sunday 04 May 2025 01:03:03 +0000 (0:00:03.368) 0:00:14.903 ************
2025-05-04 01:04:44.499459 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-04 01:04:44.499484 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-04 01:04:44.499508 | orchestrator |
2025-05-04 01:04:44.499533 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-04 01:04:44.499557 | orchestrator | Sunday 04 May 2025 01:03:07 +0000 (0:00:03.827) 0:00:18.730 ************
2025-05-04 01:04:44.499581 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-04 01:04:44.499607 | orchestrator |
2025-05-04 01:04:44.499634 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-04 01:04:44.499663 | orchestrator | Sunday 04 May 2025 01:03:11 +0000 (0:00:03.357) 0:00:22.087 ************
2025-05-04 01:04:44.499683 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-04 01:04:44.499699 | orchestrator |
2025-05-04 01:04:44.499715 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-04 01:04:44.499731 | orchestrator | Sunday 04 May 2025 01:03:15 +0000 (0:00:04.135) 0:00:26.223 ************
2025-05-04 01:04:44.499747 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:04:44.499764 | orchestrator |
2025-05-04 01:04:44.499780 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-04 01:04:44.499795 | orchestrator | Sunday 04 May 2025 01:03:18 +0000 (0:00:03.219) 0:00:29.443 ************
2025-05-04 01:04:44.499823 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:04:44.499838 | orchestrator |
2025-05-04 01:04:44.499852 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-04 01:04:44.499866 | orchestrator | Sunday 04 May 2025 01:03:22 +0000 (0:00:04.064) 0:00:33.508 ************
2025-05-04 01:04:44.499905 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:04:44.499921 | orchestrator |
2025-05-04 01:04:44.499935 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-05-04 01:04:44.499949 | orchestrator | Sunday 04 May 2025 01:03:26 +0000 (0:00:03.592) 0:00:37.100 ************
2025-05-04 01:04:44.499981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500105 | orchestrator |
2025-05-04 01:04:44.500119 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-05-04 01:04:44.500134 | orchestrator | Sunday 04 May 2025 01:03:28 +0000 (0:00:02.767) 0:00:39.868 ************
2025-05-04 01:04:44.500148 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:44.500163 | orchestrator |
2025-05-04 01:04:44.500177 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-05-04 01:04:44.500192 | orchestrator | Sunday 04 May 2025 01:03:29 +0000 (0:00:00.157) 0:00:40.026 ************
2025-05-04 01:04:44.500206 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:44.500220 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:44.500235 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:44.500250 | orchestrator |
2025-05-04 01:04:44.500264 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-05-04 01:04:44.500278 | orchestrator | Sunday 04 May 2025 01:03:29 +0000 (0:00:00.640) 0:00:40.667 ************
2025-05-04 01:04:44.500292 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-04 01:04:44.500306 | orchestrator |
2025-05-04 01:04:44.500321 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-05-04 01:04:44.500335 | orchestrator | Sunday 04 May 2025 01:03:30 +0000 (0:00:00.569) 0:00:41.236 ************
2025-05-04 01:04:44.500350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500388 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:44.500403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500442 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:44.500497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500537 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:44.500551 | orchestrator |
2025-05-04 01:04:44.500566 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-05-04 01:04:44.500580 | orchestrator | Sunday 04 May 2025 01:03:31 +0000 (0:00:01.314) 0:00:42.550 ************
2025-05-04 01:04:44.500595 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:44.500609 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:44.500623 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:04:44.500637 | orchestrator |
2025-05-04 01:04:44.500652 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-04 01:04:44.500666 | orchestrator | Sunday 04 May 2025 01:03:31 +0000 (0:00:00.293) 0:00:42.843 ************
2025-05-04 01:04:44.500681 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 01:04:44.500695 | orchestrator |
2025-05-04 01:04:44.500710 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-05-04 01:04:44.500724 | orchestrator | Sunday 04 May 2025 01:03:33 +0000 (0:00:01.161) 0:00:44.005 ************
2025-05-04 01:04:44.500738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.500855 | orchestrator |
2025-05-04 01:04:44.500869 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-05-04 01:04:44.500924 | orchestrator | Sunday 04 May 2025 01:03:36 +0000 (0:00:03.401) 0:00:47.407 ************
2025-05-04 01:04:44.500950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.500980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.501004 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:04:44.501019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.501035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:04:44.501050 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:04:44.501064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-04 01:04:44.501219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.501242 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:44.501257 | orchestrator | 2025-05-04 01:04:44.501272 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-04 01:04:44.501298 | orchestrator | Sunday 04 May 2025 01:03:37 +0000 (0:00:00.731) 0:00:48.138 ************ 2025-05-04 01:04:44.501333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.501350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.501365 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:44.501380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.501402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.501418 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:44.501443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.501466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.501481 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:44.501501 | orchestrator | 2025-05-04 01:04:44.501516 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-04 01:04:44.501534 | orchestrator | Sunday 04 May 2025 01:03:38 +0000 (0:00:01.128) 0:00:49.267 ************ 2025-05-04 01:04:44.501549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.501564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.501587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44 | INFO  | Task 47c61ec9-f5e3-4af4-9265-c661f6cbba48 is in state SUCCESS 2025-05-04 01:04:44.501637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501682 | orchestrator | 2025-05-04 01:04:44.501696 | orchestrator | TASK [magnum : Copying over magnum.conf] 
*************************************** 2025-05-04 01:04:44.501711 | orchestrator | Sunday 04 May 2025 01:03:41 +0000 (0:00:02.880) 0:00:52.148 ************ 2025-05-04 01:04:44.501743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.501759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.501781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.501797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.501860 | orchestrator | 2025-05-04 01:04:44.501943 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-04 01:04:44.501963 | orchestrator | Sunday 04 May 2025 01:03:48 +0000 (0:00:07.122) 0:00:59.270 ************ 2025-05-04 01:04:44.501978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.501994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.502009 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:44.502104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.502120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.502135 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:44.502173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-04 01:04:44.502250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:04:44.502267 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:44.502282 | orchestrator | 2025-05-04 01:04:44.502296 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-04 01:04:44.502311 | orchestrator | Sunday 04 May 2025 01:03:49 +0000 (0:00:01.404) 0:01:00.674 ************ 2025-05-04 01:04:44.502325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.502340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.502368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-04 01:04:44.502400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.502416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.502431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:04:44.502446 | orchestrator | 2025-05-04 01:04:44.502460 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-04 01:04:44.502474 | orchestrator | Sunday 04 May 2025 01:03:52 +0000 (0:00:02.831) 0:01:03.506 ************ 2025-05-04 01:04:44.502489 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:04:44.502502 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:04:44.502515 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:04:44.502527 | orchestrator | 2025-05-04 01:04:44.502540 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-04 01:04:44.502552 | orchestrator | Sunday 04 May 2025 01:03:52 +0000 (0:00:00.243) 0:01:03.749 ************ 2025-05-04 01:04:44.502565 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:44.502577 | orchestrator | 2025-05-04 01:04:44.502590 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-04 01:04:44.502603 | orchestrator | Sunday 04 May 2025 01:03:55 +0000 (0:00:02.595) 0:01:06.345 ************ 2025-05-04 01:04:44.502615 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:44.502628 | orchestrator | 2025-05-04 01:04:44.502641 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-04 01:04:44.502664 | orchestrator | Sunday 
04 May 2025 01:03:57 +0000 (0:00:02.199) 0:01:08.544 ************ 2025-05-04 01:04:44.502677 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:44.502690 | orchestrator | 2025-05-04 01:04:44.502702 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-04 01:04:44.502715 | orchestrator | Sunday 04 May 2025 01:04:12 +0000 (0:00:14.923) 0:01:23.468 ************ 2025-05-04 01:04:44.502727 | orchestrator | 2025-05-04 01:04:44.502740 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-04 01:04:44.502752 | orchestrator | Sunday 04 May 2025 01:04:12 +0000 (0:00:00.045) 0:01:23.513 ************ 2025-05-04 01:04:44.502765 | orchestrator | 2025-05-04 01:04:44.502777 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-04 01:04:44.502789 | orchestrator | Sunday 04 May 2025 01:04:12 +0000 (0:00:00.108) 0:01:23.622 ************ 2025-05-04 01:04:44.502802 | orchestrator | 2025-05-04 01:04:44.502815 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-04 01:04:44.502827 | orchestrator | Sunday 04 May 2025 01:04:12 +0000 (0:00:00.042) 0:01:23.665 ************ 2025-05-04 01:04:44.502840 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:44.502852 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:04:44.502865 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:04:44.502906 | orchestrator | 2025-05-04 01:04:44.502922 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-04 01:04:44.502947 | orchestrator | Sunday 04 May 2025 01:04:29 +0000 (0:00:16.580) 0:01:40.245 ************ 2025-05-04 01:04:44.502961 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:04:44.502985 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:04:44.503005 | orchestrator | changed: [testbed-node-2] 2025-05-04 
01:04:47.563824 | orchestrator | 2025-05-04 01:04:47.563983 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:04:47.564002 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-04 01:04:47.564016 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:04:47.564027 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:04:47.564038 | orchestrator | 2025-05-04 01:04:47.564048 | orchestrator | 2025-05-04 01:04:47.564059 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:04:47.564070 | orchestrator | Sunday 04 May 2025 01:04:42 +0000 (0:00:12.843) 0:01:53.088 ************ 2025-05-04 01:04:47.564080 | orchestrator | =============================================================================== 2025-05-04 01:04:47.564090 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.58s 2025-05-04 01:04:47.564101 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.92s 2025-05-04 01:04:47.564111 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.84s 2025-05-04 01:04:47.564143 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.12s 2025-05-04 01:04:47.564154 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.51s 2025-05-04 01:04:47.564165 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.14s 2025-05-04 01:04:47.564175 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.06s 2025-05-04 01:04:47.564185 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 
3.83s 2025-05-04 01:04:47.564195 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.59s 2025-05-04 01:04:47.564206 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.41s 2025-05-04 01:04:47.564216 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.40s 2025-05-04 01:04:47.564252 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.37s 2025-05-04 01:04:47.564263 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.36s 2025-05-04 01:04:47.564273 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.22s 2025-05-04 01:04:47.564284 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.88s 2025-05-04 01:04:47.564294 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.83s 2025-05-04 01:04:47.564304 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.77s 2025-05-04 01:04:47.564314 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.60s 2025-05-04 01:04:47.564326 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.20s 2025-05-04 01:04:47.564336 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.40s 2025-05-04 01:04:47.564349 | orchestrator | 2025-05-04 01:04:44 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:04:47.564362 | orchestrator | 2025-05-04 01:04:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:04:47.564374 | orchestrator | 2025-05-04 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:04:47.564401 | orchestrator | 2025-05-04 01:04:47 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in 
state STARTED 2025-05-04 01:04:47.564768 | orchestrator | 2025-05-04 01:04:47 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:04:47.568169 | orchestrator | 2025-05-04 01:04:47 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:04:47.569959 | orchestrator | 2025-05-04 01:04:47 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:04:47.571014 | orchestrator | 2025-05-04 01:04:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:04:50.636745 | orchestrator | 2025-05-04 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:04:50.636956 | orchestrator | 2025-05-04 01:04:50 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state STARTED 2025-05-04 01:04:50.640494 | orchestrator | 2025-05-04 01:04:50 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:04:50.642340 | orchestrator | 2025-05-04 01:04:50 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:04:50.643630 | orchestrator | 2025-05-04 01:04:50 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:04:50.645739 | orchestrator | 2025-05-04 01:04:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:04:53.689226 | orchestrator | 2025-05-04 01:04:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:04:53.689385 | orchestrator | 2025-05-04 01:04:53 | INFO  | Task ed2d72d1-b9b7-4692-8387-a985444cdff5 is in state SUCCESS 2025-05-04 01:04:53.690513 | orchestrator | 2025-05-04 01:04:53 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:04:53.690547 | orchestrator | 2025-05-04 01:04:53 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:04:53.692240 | orchestrator | 2025-05-04 01:04:53 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state 
STARTED 2025-05-04 01:04:53.693293 | orchestrator | 2025-05-04 01:04:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:04:53.693418 | orchestrator | 2025-05-04 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:04:56.738373 | orchestrator | 2025-05-04 01:04:56 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:04:56.739545 | orchestrator | 2025-05-04 01:04:56 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:04:56.740124 | orchestrator | 2025-05-04 01:04:56 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:04:56.742462 | orchestrator | 2025-05-04 01:04:56 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:04:56.743482 | orchestrator | 2025-05-04 01:04:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:04:59.783376 | orchestrator | 2025-05-04 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:04:59.783535 | orchestrator | 2025-05-04 01:04:59 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:04:59.785092 | orchestrator | 2025-05-04 01:04:59 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:04:59.786684 | orchestrator | 2025-05-04 01:04:59 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:04:59.788050 | orchestrator | 2025-05-04 01:04:59 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:04:59.791056 | orchestrator | 2025-05-04 01:04:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:02.845518 | orchestrator | 2025-05-04 01:04:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:02.845670 | orchestrator | 2025-05-04 01:05:02 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 
01:05:02.849432 | orchestrator | 2025-05-04 01:05:02 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:02.850586 | orchestrator | 2025-05-04 01:05:02 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:02.852047 | orchestrator | 2025-05-04 01:05:02 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:02.853304 | orchestrator | 2025-05-04 01:05:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:02.853424 | orchestrator | 2025-05-04 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:05.894457 | orchestrator | 2025-05-04 01:05:05 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:05.895987 | orchestrator | 2025-05-04 01:05:05 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:05.897443 | orchestrator | 2025-05-04 01:05:05 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:05.898638 | orchestrator | 2025-05-04 01:05:05 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:05.900361 | orchestrator | 2025-05-04 01:05:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:08.941614 | orchestrator | 2025-05-04 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:08.941768 | orchestrator | 2025-05-04 01:05:08 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:08.945462 | orchestrator | 2025-05-04 01:05:08 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:08.945503 | orchestrator | 2025-05-04 01:05:08 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:08.946807 | orchestrator | 2025-05-04 01:05:08 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 
01:05:11.986339 | orchestrator | 2025-05-04 01:05:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:11.986504 | orchestrator | 2025-05-04 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:11.986642 | orchestrator | 2025-05-04 01:05:11 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:11.987348 | orchestrator | 2025-05-04 01:05:11 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:11.987376 | orchestrator | 2025-05-04 01:05:11 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:11.988209 | orchestrator | 2025-05-04 01:05:11 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:11.989185 | orchestrator | 2025-05-04 01:05:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:15.030249 | orchestrator | 2025-05-04 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:15.030375 | orchestrator | 2025-05-04 01:05:15 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:15.031820 | orchestrator | 2025-05-04 01:05:15 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:15.032274 | orchestrator | 2025-05-04 01:05:15 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:15.032846 | orchestrator | 2025-05-04 01:05:15 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:15.033537 | orchestrator | 2025-05-04 01:05:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:18.070211 | orchestrator | 2025-05-04 01:05:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:18.070338 | orchestrator | 2025-05-04 01:05:18 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:18.071250 | orchestrator 
| 2025-05-04 01:05:18 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:18.071696 | orchestrator | 2025-05-04 01:05:18 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:18.072346 | orchestrator | 2025-05-04 01:05:18 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:18.073090 | orchestrator | 2025-05-04 01:05:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:21.106429 | orchestrator | 2025-05-04 01:05:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:21.106675 | orchestrator | 2025-05-04 01:05:21 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:21.108098 | orchestrator | 2025-05-04 01:05:21 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:21.108139 | orchestrator | 2025-05-04 01:05:21 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:21.108813 | orchestrator | 2025-05-04 01:05:21 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:21.109303 | orchestrator | 2025-05-04 01:05:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:24.143149 | orchestrator | 2025-05-04 01:05:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:24.143380 | orchestrator | 2025-05-04 01:05:24 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:24.143945 | orchestrator | 2025-05-04 01:05:24 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:24.143981 | orchestrator | 2025-05-04 01:05:24 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:24.144503 | orchestrator | 2025-05-04 01:05:24 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:24.145058 | orchestrator | 
2025-05-04 01:05:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:24.146813 | orchestrator | 2025-05-04 01:05:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:27.180237 | orchestrator | 2025-05-04 01:05:27 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:27.181414 | orchestrator | 2025-05-04 01:05:27 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:27.182797 | orchestrator | 2025-05-04 01:05:27 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:27.184003 | orchestrator | 2025-05-04 01:05:27 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:27.185437 | orchestrator | 2025-05-04 01:05:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:27.185891 | orchestrator | 2025-05-04 01:05:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:30.234196 | orchestrator | 2025-05-04 01:05:30 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:33.260253 | orchestrator | 2025-05-04 01:05:30 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:33.260390 | orchestrator | 2025-05-04 01:05:30 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:33.260410 | orchestrator | 2025-05-04 01:05:30 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:33.260426 | orchestrator | 2025-05-04 01:05:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:33.260442 | orchestrator | 2025-05-04 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:33.260475 | orchestrator | 2025-05-04 01:05:33 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:33.262961 | orchestrator | 2025-05-04 01:05:33 | INFO  | 
Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:33.263010 | orchestrator | 2025-05-04 01:05:33 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:33.263565 | orchestrator | 2025-05-04 01:05:33 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:33.264301 | orchestrator | 2025-05-04 01:05:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:36.306730 | orchestrator | 2025-05-04 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:36.306857 | orchestrator | 2025-05-04 01:05:36 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:36.307243 | orchestrator | 2025-05-04 01:05:36 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:36.307282 | orchestrator | 2025-05-04 01:05:36 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:36.307672 | orchestrator | 2025-05-04 01:05:36 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:36.308227 | orchestrator | 2025-05-04 01:05:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:39.337299 | orchestrator | 2025-05-04 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:39.337425 | orchestrator | 2025-05-04 01:05:39 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:39.338937 | orchestrator | 2025-05-04 01:05:39 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:39.339364 | orchestrator | 2025-05-04 01:05:39 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:39.341652 | orchestrator | 2025-05-04 01:05:39 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:39.343395 | orchestrator | 2025-05-04 01:05:39 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:42.400395 | orchestrator | 2025-05-04 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:42.400514 | orchestrator | 2025-05-04 01:05:42 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:42.402107 | orchestrator | 2025-05-04 01:05:42 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:42.403410 | orchestrator | 2025-05-04 01:05:42 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:42.406127 | orchestrator | 2025-05-04 01:05:42 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:42.406712 | orchestrator | 2025-05-04 01:05:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:45.448122 | orchestrator | 2025-05-04 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:45.448222 | orchestrator | 2025-05-04 01:05:45 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:45.451773 | orchestrator | 2025-05-04 01:05:45 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:45.453849 | orchestrator | 2025-05-04 01:05:45 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:45.456278 | orchestrator | 2025-05-04 01:05:45 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:45.456675 | orchestrator | 2025-05-04 01:05:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:45.456721 | orchestrator | 2025-05-04 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:48.493189 | orchestrator | 2025-05-04 01:05:48 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:48.495726 | orchestrator | 2025-05-04 01:05:48 | INFO  | Task 
971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:48.497146 | orchestrator | 2025-05-04 01:05:48 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:48.498558 | orchestrator | 2025-05-04 01:05:48 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:48.500199 | orchestrator | 2025-05-04 01:05:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:48.500850 | orchestrator | 2025-05-04 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:51.550832 | orchestrator | 2025-05-04 01:05:51 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:51.551128 | orchestrator | 2025-05-04 01:05:51 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:51.552058 | orchestrator | 2025-05-04 01:05:51 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:51.552813 | orchestrator | 2025-05-04 01:05:51 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:51.553538 | orchestrator | 2025-05-04 01:05:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:51.553727 | orchestrator | 2025-05-04 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:54.590191 | orchestrator | 2025-05-04 01:05:54 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:54.590823 | orchestrator | 2025-05-04 01:05:54 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state STARTED 2025-05-04 01:05:54.590901 | orchestrator | 2025-05-04 01:05:54 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:54.591443 | orchestrator | 2025-05-04 01:05:54 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:54.593207 | orchestrator | 2025-05-04 01:05:54 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:05:57.623384 | orchestrator | 2025-05-04 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:05:57.623505 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:05:57.623974 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:05:57.624457 | orchestrator | 2025-05-04 01:05:57.624494 | orchestrator | 2025-05-04 01:05:57.624509 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:05:57.624524 | orchestrator | 2025-05-04 01:05:57.624538 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:05:57.624553 | orchestrator | Sunday 04 May 2025 01:04:20 +0000 (0:00:00.244) 0:00:00.244 ************ 2025-05-04 01:05:57.624567 | orchestrator | ok: [testbed-node-3] 2025-05-04 01:05:57.624583 | orchestrator | ok: [testbed-node-4] 2025-05-04 01:05:57.624597 | orchestrator | ok: [testbed-node-5] 2025-05-04 01:05:57.624610 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:05:57.624624 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:05:57.624639 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:05:57.624653 | orchestrator | ok: [testbed-manager] 2025-05-04 01:05:57.624671 | orchestrator | 2025-05-04 01:05:57.624695 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:05:57.624720 | orchestrator | Sunday 04 May 2025 01:04:21 +0000 (0:00:00.876) 0:00:01.121 ************ 2025-05-04 01:05:57.624746 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.624770 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.624795 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.624820 | 
orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.625002 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.625029 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.625045 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-04 01:05:57.625060 | orchestrator | 2025-05-04 01:05:57.625075 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-04 01:05:57.625089 | orchestrator | 2025-05-04 01:05:57.625103 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-04 01:05:57.625117 | orchestrator | Sunday 04 May 2025 01:04:22 +0000 (0:00:00.981) 0:00:02.103 ************ 2025-05-04 01:05:57.625132 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2025-05-04 01:05:57.625148 | orchestrator | 2025-05-04 01:05:57.625167 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-04 01:05:57.625191 | orchestrator | Sunday 04 May 2025 01:04:24 +0000 (0:00:01.581) 0:00:03.684 ************ 2025-05-04 01:05:57.625216 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-05-04 01:05:57.625241 | orchestrator | 2025-05-04 01:05:57.625266 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-04 01:05:57.625292 | orchestrator | Sunday 04 May 2025 01:04:27 +0000 (0:00:03.730) 0:00:07.415 ************ 2025-05-04 01:05:57.625345 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-04 01:05:57.625371 | orchestrator | changed: [testbed-node-3] => (item=swift -> 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-04 01:05:57.625387 | orchestrator | 2025-05-04 01:05:57.625401 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-04 01:05:57.625415 | orchestrator | Sunday 04 May 2025 01:04:34 +0000 (0:00:06.243) 0:00:13.658 ************ 2025-05-04 01:05:57.625429 | orchestrator | ok: [testbed-node-3] => (item=service) 2025-05-04 01:05:57.625443 | orchestrator | 2025-05-04 01:05:57.625458 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-04 01:05:57.625478 | orchestrator | Sunday 04 May 2025 01:04:37 +0000 (0:00:03.123) 0:00:16.782 ************ 2025-05-04 01:05:57.625493 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:05:57.625507 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-05-04 01:05:57.625521 | orchestrator | 2025-05-04 01:05:57.625535 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-04 01:05:57.625549 | orchestrator | Sunday 04 May 2025 01:04:41 +0000 (0:00:03.788) 0:00:20.571 ************ 2025-05-04 01:05:57.625563 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-05-04 01:05:57.625577 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-05-04 01:05:57.625591 | orchestrator | 2025-05-04 01:05:57.625606 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-04 01:05:57.625628 | orchestrator | Sunday 04 May 2025 01:04:46 +0000 (0:00:05.861) 0:00:26.432 ************ 2025-05-04 01:05:57.625653 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-05-04 01:05:57.625679 | orchestrator | 2025-05-04 01:05:57.625703 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:05:57.625736 | orchestrator | 
testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.625763 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.625788 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.625813 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.626003 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.626172 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.627156 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:05:57.627198 | orchestrator | 2025-05-04 01:05:57.627222 | orchestrator | 2025-05-04 01:05:57.627245 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:05:57.627265 | orchestrator | Sunday 04 May 2025 01:04:52 +0000 (0:00:05.741) 0:00:32.173 ************ 2025-05-04 01:05:57.627278 | orchestrator | =============================================================================== 2025-05-04 01:05:57.627291 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.24s 2025-05-04 01:05:57.627303 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.86s 2025-05-04 01:05:57.627316 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.74s 2025-05-04 01:05:57.627329 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.79s 2025-05-04 01:05:57.627354 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.73s 2025-05-04 01:05:57.627367 | orchestrator | 
service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s 2025-05-04 01:05:57.627380 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.58s 2025-05-04 01:05:57.627393 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2025-05-04 01:05:57.627405 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2025-05-04 01:05:57.627418 | orchestrator | 2025-05-04 01:05:57.627430 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task 971138de-785d-4698-a911-0390b6d62daa is in state SUCCESS 2025-05-04 01:05:57.627444 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:05:57.627457 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:05:57.627477 | orchestrator | 2025-05-04 01:05:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:06:00.650586 | orchestrator | 2025-05-04 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:06:00.650706 | orchestrator | 2025-05-04 01:06:00 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED 2025-05-04 01:06:00.650936 | orchestrator | 2025-05-04 01:06:00 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:06:00.652041 | orchestrator | 2025-05-04 01:06:00 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED 2025-05-04 01:06:00.652711 | orchestrator | 2025-05-04 01:06:00 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED 2025-05-04 01:06:00.653474 | orchestrator | 2025-05-04 01:06:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:06:03.682441 | orchestrator | 2025-05-04 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:06:03.682578 | orchestrator | 2025-05-04 
orchestrator | (identical status polling repeated every ~3 seconds from 2025-05-04 01:06:03 to 01:07:35: tasks eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5, c62bfd00-2b64-4f36-9ffb-5480f8b81155, 19d7d962-13b3-43c9-8286-f56280be194a, 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 and 06b7ef9d-42d9-4dab-ab51-d0c173886a5a remained in state STARTED)
eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
orchestrator | 2025-05-04 01:07:38 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-05-04 01:07:41 | INFO  | Task eacbf32a-bd9c-45b0-8f69-bea3ebbdc9d5 is in state SUCCESS
orchestrator |
orchestrator | PLAY [Download ironic ipa images] **********************************************
orchestrator |
orchestrator | TASK [Ensure the destination directory exists] *********************************
orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.264) 0:00:00.264 ************
orchestrator | changed: [localhost]
orchestrator |
orchestrator | TASK [Download ironic-agent initramfs] *****************************************
orchestrator | Sunday 04 May 2025 00:59:40 +0000 (0:00:00.714) 0:00:00.979 ************
orchestrator |
orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
orchestrator | (STILL ALIVE message emitted 7 times while the download ran)
orchestrator | changed: [localhost]
orchestrator |
orchestrator | TASK [Download ironic-agent kernel] ********************************************
orchestrator | Sunday 04 May 2025 01:05:38 +0000 (0:05:57.214) 0:05:58.193 ************
orchestrator | changed: [localhost]
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Sunday 04 May 2025 01:05:51 +0000 (0:00:13.257) 0:06:11.450 ************
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Sunday 04 May 2025 01:05:52 +0000 (0:00:00.927) 0:06:12.378 ************
orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
orchestrator |
orchestrator | PLAY [Apply role ironic] *******************************************************
orchestrator | skipping: no hosts matched
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Sunday 04 May 2025 01:05:53 +0000 (0:00:01.534) 0:06:13.913 ************
orchestrator | ===============================================================================
orchestrator | Download ironic-agent initramfs --------------------------------------- 357.21s
orchestrator | Download ironic-agent kernel ------------------------------------------- 13.26s
orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
orchestrator | Ensure the destination directory exists --------------------------------- 0.71s
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Sunday 04 May 2025 01:03:28 +0000 (0:00:00.422) 0:00:00.422 ************
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Sunday 04 May 2025 01:03:29 +0000 (0:00:01.312) 0:00:01.735 ************
orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
orchestrator |
orchestrator | PLAY [Apply role prometheus] ***************************************************
orchestrator |
orchestrator | TASK [prometheus : include_tasks] **********************************************
orchestrator | Sunday 04 May 2025 01:03:30 +0000 (0:00:01.097) 0:00:02.832 ************
orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
orchestrator | Sunday 04 May 2025 01:03:32 +0000 (0:00:01.842) 0:00:04.675 ************
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | skipping: [testbed-manager] => (item={'key':
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242180 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.242196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.242239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.242268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.242284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.242299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 01:07:41.242341 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.242370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242492 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.242543 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242583 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.242598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 
'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.242705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.242720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.242736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.242757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.242886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.242904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.242927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.242977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.242994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.243010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.243025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.243096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.243109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.243122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.243141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.243197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.243371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.243418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-04 01:07:41.243445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.243500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.243521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.243535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.243593 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.243642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.243656 | orchestrator | 2025-05-04 01:07:41.243669 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-04 01:07:41.243682 | orchestrator | Sunday 04 May 2025 01:03:37 +0000 (0:00:04.344) 0:00:09.019 ************ 2025-05-04 01:07:41.243695 | orchestrator | included: 
/ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:07:41.243708 | orchestrator | 2025-05-04 01:07:41.243721 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-04 01:07:41.243734 | orchestrator | Sunday 04 May 2025 01:03:38 +0000 (0:00:01.531) 0:00:10.551 ************ 2025-05-04 01:07:41.243747 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-04 01:07:41.243767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.243922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.243968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.243982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.243995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244029 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244123 
| orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 01:07:41.244157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.244212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244230 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.244336 | orchestrator | 2025-05-04 01:07:41.244354 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-04 01:07:41.244380 | orchestrator | Sunday 04 May 2025 01:03:45 +0000 (0:00:06.601) 0:00:17.153 ************ 2025-05-04 01:07:41.244411 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244518 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.244532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.244640 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.244692 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244705 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.244718 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.244731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244757 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.244810 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.244885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-05-04 01:07:41.244951 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.244964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.244978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.244991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245004 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.245017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.245037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245070 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.245082 | orchestrator | 2025-05-04 01:07:41.245095 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-04 01:07:41.245108 | orchestrator | Sunday 04 May 2025 01:03:48 +0000 (0:00:02.823) 0:00:19.977 ************ 2025-05-04 01:07:41.245121 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.245147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.245576 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.245594 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.245617 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.245649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 
01:07:41.245688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.245700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.245711 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.245722 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.245732 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.245742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-04 01:07:41.245753 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.245764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.245804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.245839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.245852 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:07:41.245863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.245908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.245921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.245932 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:07:41.245956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.245967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.245985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.245996 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:07:41.246006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246108 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:07:41.246196 | orchestrator |
2025-05-04 01:07:41.246208 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-04 01:07:41.246222 | orchestrator | Sunday 04 May 2025 01:03:51 +0000 (0:00:03.399) 0:00:23.376 ************
2025-05-04 01:07:41.246234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246264 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-04 01:07:41.246387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246399 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246524 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-04 01:07:41.246566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.246652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.246669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.246725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.246741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.246790 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.246801 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.246868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.246884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.246977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.246988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.247062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.247074 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value':
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.247093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.247112 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.247134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.247157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.247169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.247185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.247196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.247207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.247218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.247253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.247268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.247279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.247290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.247324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.247351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.247373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.247394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.247428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.247444 | orchestrator | 2025-05-04 01:07:41.247455 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-04 01:07:41.247465 | orchestrator | Sunday 04 May 2025 01:03:57 +0000 (0:00:06.403) 0:00:29.780 ************ 2025-05-04 01:07:41.247476 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 01:07:41.247491 | orchestrator | 2025-05-04 01:07:41.247501 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-04 01:07:41.247512 | orchestrator | Sunday 04 May 2025 01:03:58 +0000 (0:00:00.519) 0:00:30.299 ************ 2025-05-04 
01:07:41.247522 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.247533 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.247544 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.247555 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.247574 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.247589 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248169 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 
'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248194 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248212 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248224 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248235 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248258 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248269 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248319 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248333 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1088888, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.248345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248381 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248393 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248403 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248437 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248449 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248476 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248487 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248497 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248511 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248540 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248551 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248578 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248588 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248598 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248612 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248641 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248652 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248663 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248679 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1088904, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.248689 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248699 | orchestrator | skipping: [testbed-node-5] => 
(item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248714 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248743 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248755 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248765 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248781 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248793 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248808 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248819 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248868 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248880 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248897 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248908 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248919 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248934 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248974 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.248986 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249003 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249014 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249025 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1088892, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249039 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249049 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.249059 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249089 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249108 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249119 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.249130 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249140 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249156 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.249165 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249174 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.249183 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249192 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.249201 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-04 01:07:41.249210 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.249238 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088901, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249255 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088955, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8863533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249265 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088921, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8813531, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249274 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088899, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249288 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088909, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.879353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249297 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088948, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.885353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249306 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088895, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8773532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249340 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1088928, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.882353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-04 01:07:41.249351 | orchestrator | 2025-05-04 01:07:41.249361 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-04 01:07:41.249370 | orchestrator | Sunday 04 May 2025 01:04:32 +0000 
(0:00:34.361) 0:01:04.660 ************ 2025-05-04 01:07:41.249379 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 01:07:41.249387 | orchestrator | 2025-05-04 01:07:41.249396 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-04 01:07:41.249405 | orchestrator | Sunday 04 May 2025 01:04:33 +0000 (0:00:00.534) 0:01:05.195 ************ 2025-05-04 01:07:41.249414 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249423 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249432 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249449 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249458 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 01:07:41.249467 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249489 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249498 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249506 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249515 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:07:41.249524 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249533 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249541 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249558 | orchestrator | 
node-2/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249567 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249584 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249602 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249610 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249619 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249628 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249645 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249654 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249671 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249688 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-04 01:07:41.249697 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.249706 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249714 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-04 01:07:41.249723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-04 01:07:41.249732 | orchestrator | node-5/prometheus.yml.d' is not a directory 
2025-05-04 01:07:41.249741 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-04 01:07:41.249749 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-04 01:07:41.249758 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-04 01:07:41.249767 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-04 01:07:41.249776 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-04 01:07:41.249785 | orchestrator | 2025-05-04 01:07:41.249794 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-04 01:07:41.249802 | orchestrator | Sunday 04 May 2025 01:04:34 +0000 (0:00:01.372) 0:01:06.568 ************ 2025-05-04 01:07:41.249811 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249820 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.249843 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249852 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.249860 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249869 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.249899 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249914 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.249924 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249933 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.249941 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-04 01:07:41.249950 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.249959 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-04 01:07:41.249968 | orchestrator | 2025-05-04 01:07:41.249977 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-04 01:07:41.249989 | orchestrator | Sunday 04 May 2025 01:04:50 +0000 (0:00:16.108) 0:01:22.676 ************ 2025-05-04 01:07:41.249997 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250006 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250035 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250045 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250054 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250063 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250072 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250081 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250089 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250098 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250107 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-04 01:07:41.250116 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250124 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-04 01:07:41.250133 | orchestrator | 2025-05-04 01:07:41.250142 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-04 01:07:41.250154 | orchestrator | Sunday 04 May 2025 01:04:56 +0000 
(0:00:05.615) 0:01:28.292 ************ 2025-05-04 01:07:41.250163 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250172 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250181 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250190 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250199 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250207 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250216 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250225 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250234 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250242 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250251 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-04 01:07:41.250260 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250269 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-04 01:07:41.250281 | orchestrator | 2025-05-04 01:07:41.250290 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-04 01:07:41.250299 | orchestrator | Sunday 04 May 2025 01:05:01 +0000 (0:00:04.767) 0:01:33.059 ************ 2025-05-04 01:07:41.250308 | 
orchestrator | ok: [testbed-manager -> localhost] 2025-05-04 01:07:41.250316 | orchestrator | 2025-05-04 01:07:41.250325 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-04 01:07:41.250334 | orchestrator | Sunday 04 May 2025 01:05:01 +0000 (0:00:00.640) 0:01:33.700 ************ 2025-05-04 01:07:41.250342 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.250351 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250360 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250369 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250377 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250386 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250394 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250403 | orchestrator | 2025-05-04 01:07:41.250412 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-04 01:07:41.250421 | orchestrator | Sunday 04 May 2025 01:05:02 +0000 (0:00:00.912) 0:01:34.612 ************ 2025-05-04 01:07:41.250429 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.250438 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250447 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250455 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250464 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:07:41.250472 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:07:41.250481 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:07:41.250490 | orchestrator | 2025-05-04 01:07:41.250502 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-04 01:07:41.250511 | orchestrator | Sunday 04 May 2025 01:05:06 +0000 (0:00:04.129) 0:01:38.742 ************ 2025-05-04 01:07:41.250520 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250529 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250537 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250546 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250555 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250564 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250572 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250581 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250590 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250599 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250609 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250619 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250628 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-04 01:07:41.250637 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.250646 | orchestrator | 2025-05-04 01:07:41.250655 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-04 01:07:41.250664 | orchestrator | Sunday 04 May 2025 01:05:09 +0000 (0:00:02.911) 0:01:41.653 ************ 2025-05-04 01:07:41.250673 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250682 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250690 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250699 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250708 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250721 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250730 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250739 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250748 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250756 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250765 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-04 01:07:41.250774 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.250783 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-04 01:07:41.250791 | orchestrator | 2025-05-04 01:07:41.250800 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-04 01:07:41.250809 | orchestrator | Sunday 04 May 2025 01:05:13 +0000 (0:00:04.074) 0:01:45.728 ************ 2025-05-04 01:07:41.250818 | orchestrator | [WARNING]: Skipped 2025-05-04 01:07:41.250865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-04 01:07:41.250874 | orchestrator | due to this access issue: 2025-05-04 01:07:41.250883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-04 01:07:41.250892 | orchestrator | not a directory 2025-05-04 01:07:41.250905 | orchestrator | ok: [testbed-manager -> 
localhost] 2025-05-04 01:07:41.250913 | orchestrator | 2025-05-04 01:07:41.250922 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-04 01:07:41.250930 | orchestrator | Sunday 04 May 2025 01:05:15 +0000 (0:00:01.851) 0:01:47.580 ************ 2025-05-04 01:07:41.250939 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.250948 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.250957 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.250965 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.250974 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.250983 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.250992 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.251000 | orchestrator | 2025-05-04 01:07:41.251009 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-04 01:07:41.251018 | orchestrator | Sunday 04 May 2025 01:05:16 +0000 (0:00:00.863) 0:01:48.443 ************ 2025-05-04 01:07:41.251026 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.251035 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.251044 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.251052 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.251061 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.251069 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.251078 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.251087 | orchestrator | 2025-05-04 01:07:41.251095 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-04 01:07:41.251104 | orchestrator | Sunday 04 May 2025 01:05:17 +0000 (0:00:00.756) 0:01:49.200 ************ 2025-05-04 01:07:41.251113 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251121 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.251130 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251143 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251153 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.251162 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.251170 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251184 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.251192 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251201 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.251210 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251219 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.251228 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-04 01:07:41.251237 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.251246 | orchestrator | 2025-05-04 01:07:41.251255 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-04 01:07:41.251264 | orchestrator | Sunday 04 May 2025 01:05:20 +0000 (0:00:03.485) 0:01:52.685 ************ 2025-05-04 01:07:41.251272 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251281 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:07:41.251290 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251299 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:07:41.251307 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251316 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:07:41.251325 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251333 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:07:41.251342 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251351 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:07:41.251360 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251368 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:07:41.251377 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-04 01:07:41.251386 | orchestrator | skipping: [testbed-manager] 2025-05-04 01:07:41.251394 | orchestrator | 2025-05-04 01:07:41.251402 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-04 01:07:41.251410 | orchestrator | Sunday 04 May 2025 01:05:24 +0000 (0:00:03.531) 0:01:56.217 ************ 2025-05-04 01:07:41.251418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': 
'9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251464 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-04 01:07:41.251473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-04 01:07:41.251508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251517 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251551 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-04 01:07:41.251654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-04 01:07:41.251663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251683 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.251695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251726 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.251735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.251743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.251759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.251775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.251783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.251792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251863 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-04 01:07:41.251877 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.251886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.251908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.251921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-04 01:07:41.251936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.251954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.251983 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-04 01:07:41.252001 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-04 01:07:41.252010 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.252019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252027 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.252048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.252062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-04 01:07:41.252083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-04 01:07:41.252121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-04 01:07:41.252130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.252141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.252150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.252159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-04 01:07:41.252177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-04 01:07:41.252186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.252215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.252279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-04 01:07:41.252288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-04 01:07:41.252309 | orchestrator |
2025-05-04 01:07:41.252317 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-04 01:07:41.252325 | orchestrator | Sunday 04 May 2025 01:05:29 +0000 (0:00:05.188) 0:02:01.405 ************
2025-05-04 01:07:41.252334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-04 01:07:41.252342 | orchestrator |
2025-05-04 01:07:41.252350 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252358 | orchestrator | Sunday 04 May 2025 01:05:32 +0000 (0:00:03.025) 0:02:04.430 ************
2025-05-04 01:07:41.252366 | orchestrator |
2025-05-04 01:07:41.252374 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252382 | orchestrator | Sunday 04 May 2025 01:05:32 +0000 (0:00:00.058) 0:02:04.488 ************
2025-05-04 01:07:41.252390 | orchestrator |
2025-05-04 01:07:41.252398 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252406 | orchestrator | Sunday 04 May 2025 01:05:32 +0000 (0:00:00.293) 0:02:04.782 ************
2025-05-04 01:07:41.252414 | orchestrator |
2025-05-04 01:07:41.252422 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252431 | orchestrator | Sunday 04 May 2025 01:05:32 +0000 (0:00:00.057) 0:02:04.839 ************
2025-05-04 01:07:41.252439 | orchestrator |
2025-05-04 01:07:41.252447 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252455 | orchestrator | Sunday 04 May 2025 01:05:32 +0000 (0:00:00.054) 0:02:04.894 ************
2025-05-04 01:07:41.252463 | orchestrator |
2025-05-04 01:07:41.252471 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252479 | orchestrator | Sunday 04 May 2025 01:05:33 +0000 (0:00:00.054) 0:02:04.949 ************
2025-05-04 01:07:41.252487 | orchestrator |
2025-05-04 01:07:41.252495 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-04 01:07:41.252503 | orchestrator | Sunday 04 May 2025 01:05:33 +0000 (0:00:00.386) 0:02:05.336 ************
2025-05-04 01:07:41.252511 | orchestrator |
2025-05-04 01:07:41.252519 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-04 01:07:41.252528 | orchestrator | Sunday 04 May 2025 01:05:33 +0000 (0:00:00.160) 0:02:05.496 ************
2025-05-04 01:07:41.252536 | orchestrator | changed: [testbed-manager]
2025-05-04 01:07:41.252544 | orchestrator |
2025-05-04 01:07:41.252552 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-04 01:07:41.252560 | orchestrator | Sunday 04 May 2025 01:05:52 +0000 (0:00:18.485) 0:02:23.982 ************
2025-05-04 01:07:41.252568 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:07:41.252576 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:07:41.252585 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:07:41.252593 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:07:41.252601 | orchestrator | changed: [testbed-manager]
2025-05-04 01:07:41.252609 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:07:41.252617 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:07:41.252625 | orchestrator |
2025-05-04 01:07:41.252633 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-04 01:07:41.252641 | orchestrator | Sunday 04 May 2025 01:06:14 +0000 (0:00:22.291) 0:02:46.273 ************
2025-05-04 01:07:41.252650 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:07:41.252658 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:07:41.252666 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:07:41.252674 | orchestrator |
2025-05-04 01:07:41.252682 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-04 01:07:41.252690 | orchestrator | Sunday 04 May 2025 01:06:26 +0000 (0:00:12.374) 0:02:58.648 ************
2025-05-04 01:07:41.252698 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:07:41.252709 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:07:41.252718 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:07:41.252726 | orchestrator |
2025-05-04 01:07:41.252737 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-04 01:07:41.252745 | orchestrator | Sunday 04 May 2025 01:06:39 +0000 (0:00:12.380) 0:03:11.028 ************
2025-05-04 01:07:41.252754 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:07:41.252762 | orchestrator | changed: [testbed-manager]
2025-05-04 01:07:41.252772 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:07:41.252781 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:07:41.252789 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:07:41.252797 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:07:41.252806 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:07:41.252814 | orchestrator |
2025-05-04 01:07:41.252835 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-04 01:07:41.252844 | orchestrator | Sunday 04 May 2025 01:06:58 +0000 (0:00:19.389) 0:03:30.418 ************
2025-05-04 01:07:41.252853 | orchestrator | changed: [testbed-manager]
2025-05-04 01:07:41.252861 | orchestrator |
2025-05-04 01:07:41.252869 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-04 01:07:41.252877 | orchestrator | Sunday 04 May 2025 01:07:08 +0000 (0:00:09.841) 0:03:40.260 ************
2025-05-04 01:07:41.252885 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:07:41.252893 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:07:41.252902 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:07:41.252910 | orchestrator |
2025-05-04 01:07:41.252918 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-04 01:07:41.252926 | orchestrator | Sunday 04 May 2025 01:07:16 +0000 (0:00:08.536) 0:03:48.796 ************
2025-05-04 01:07:41.252934 | orchestrator | changed: [testbed-manager]
2025-05-04 01:07:41.252942 | orchestrator |
2025-05-04 01:07:41.252950 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-04 01:07:41.252959 | orchestrator | Sunday 04 May 2025 01:07:25 +0000 (0:00:08.757) 0:03:57.554 ************
2025-05-04 01:07:41.252967 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:07:41.252975 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:07:41.252983 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:07:41.252991 | orchestrator |
2025-05-04 01:07:41.252999 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 01:07:41.253008 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-04 01:07:41.253016 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-04 01:07:41.253024 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-04 01:07:41.253032 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-04 01:07:41.253041 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-04 01:07:41.253049 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-04 01:07:41.253057 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-04 01:07:41.253065 | orchestrator |
2025-05-04 01:07:41.253073 | orchestrator |
2025-05-04 01:07:41.253081 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 01:07:41.253089 | orchestrator | Sunday 04 May 2025 01:07:40 +0000 (0:00:14.467) 0:04:12.022 ************
2025-05-04 01:07:41.253097 | orchestrator | ===============================================================================
2025-05-04 01:07:41.253108 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 34.36s
2025-05-04 01:07:41.253120 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.29s
2025-05-04 01:07:41.253129 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.39s
2025-05-04 01:07:41.253137 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.49s
2025-05-04 01:07:41.253145 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.11s
2025-05-04 01:07:41.253153 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 14.47s
2025-05-04 01:07:41.253161 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.38s
2025-05-04 01:07:41.253170 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.37s
2025-05-04 01:07:41.253178 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.84s
2025-05-04 01:07:41.253186 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.76s
2025-05-04 01:07:41.253194 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 8.54s
2025-05-04 01:07:41.253202 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.60s
2025-05-04 01:07:41.253210 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.40s
2025-05-04 01:07:41.253218 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.62s
2025-05-04 01:07:41.253226 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.19s
2025-05-04 01:07:41.253234 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.77s
2025-05-04 01:07:41.253242 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.34s
2025-05-04 01:07:41.253250 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.13s
2025-05-04 01:07:41.253261 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.07s
2025-05-04 01:07:44.275786 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.53s
2025-05-04 01:07:44.276144 | orchestrator | 2025-05-04 01:07:41 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:44.276180 | orchestrator | 2025-05-04 01:07:41 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:44.276204 | orchestrator | 2025-05-04 01:07:41 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:44.276229 | orchestrator | 2025-05-04 01:07:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:44.276254 | orchestrator | 2025-05-04 01:07:41 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:44.276420 | orchestrator | 2025-05-04 01:07:44 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:44.276453 | orchestrator | 2025-05-04 01:07:44 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:44.276480 | orchestrator | 2025-05-04 01:07:44 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:44.276512 | orchestrator | 2025-05-04 01:07:44 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:44.277044 | orchestrator | 2025-05-04 01:07:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:47.309969 | orchestrator | 2025-05-04 01:07:44 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:47.310147 | orchestrator | 2025-05-04 01:07:47 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:47.310956 | orchestrator | 2025-05-04 01:07:47 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:47.310989 | orchestrator | 2025-05-04 01:07:47 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:47.311683 | orchestrator | 2025-05-04 01:07:47 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:47.312130 | orchestrator | 2025-05-04 01:07:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:50.361022 | orchestrator | 2025-05-04 01:07:47 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:50.361145 | orchestrator | 2025-05-04 01:07:50 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:50.361282 | orchestrator | 2025-05-04 01:07:50 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:50.366929 | orchestrator | 2025-05-04 01:07:50 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:50.367657 | orchestrator | 2025-05-04 01:07:50 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:50.367691 | orchestrator | 2025-05-04 01:07:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:53.416386 | orchestrator | 2025-05-04 01:07:50 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:53.416517 | orchestrator | 2025-05-04 01:07:53 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:53.417565 | orchestrator | 2025-05-04 01:07:53 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:53.418734 | orchestrator | 2025-05-04 01:07:53 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:53.419691 | orchestrator | 2025-05-04 01:07:53 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:53.420811 | orchestrator | 2025-05-04 01:07:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:53.421299 | orchestrator | 2025-05-04 01:07:53 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:56.471333 | orchestrator | 2025-05-04 01:07:56 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:56.471568 | orchestrator | 2025-05-04 01:07:56 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:56.472185 | orchestrator | 2025-05-04 01:07:56 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:56.472985 | orchestrator | 2025-05-04 01:07:56 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:56.473657 | orchestrator | 2025-05-04 01:07:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:07:59.512505 | orchestrator | 2025-05-04 01:07:56 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:07:59.512660 | orchestrator | 2025-05-04 01:07:59 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:07:59.514917 | orchestrator | 2025-05-04 01:07:59 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:07:59.514966 | orchestrator | 2025-05-04 01:07:59 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:07:59.515001 | orchestrator | 2025-05-04 01:07:59 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:07:59.516170 | orchestrator | 2025-05-04 01:07:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:08:02.567961 | orchestrator | 2025-05-04 01:07:59 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:08:02.568121 | orchestrator | 2025-05-04 01:08:02 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:08:02.569523 | orchestrator | 2025-05-04 01:08:02 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:08:02.571599 | orchestrator | 2025-05-04 01:08:02 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:08:02.573213 | orchestrator | 2025-05-04 01:08:02 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:08:02.574889 | orchestrator | 2025-05-04 01:08:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:08:05.636398 | orchestrator | 2025-05-04 01:08:02 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:08:05.636551 | orchestrator | 2025-05-04 01:08:05 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:08:05.640789 | orchestrator | 2025-05-04 01:08:05 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:08:05.641269 | orchestrator | 2025-05-04 01:08:05 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:08:05.643279 | orchestrator | 2025-05-04 01:08:05 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:08:05.645618 | orchestrator | 2025-05-04 01:08:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:08:08.690242 | orchestrator | 2025-05-04 01:08:05 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:08:08.690387 | orchestrator | 2025-05-04 01:08:08 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:08:08.692041 | orchestrator | 2025-05-04 01:08:08 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:08:08.693330 | orchestrator | 2025-05-04 01:08:08 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:08:08.694494 | orchestrator | 2025-05-04 01:08:08 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:08:08.695484 | orchestrator | 2025-05-04 01:08:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:08:11.738238 | orchestrator | 2025-05-04 01:08:08 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:08:11.738415 | orchestrator | 2025-05-04 01:08:11 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:08:11.741434 | orchestrator | 2025-05-04 01:08:11 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:08:11.744137 | orchestrator | 2025-05-04
01:08:11 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:08:11.745475 | orchestrator | 2025-05-04 01:08:11 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state STARTED
2025-05-04 01:08:11.747634 | orchestrator | 2025-05-04 01:08:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:08:14.810483 | orchestrator | 2025-05-04 01:08:11 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:08:14.810648 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:08:14.815450 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED
2025-05-04 01:08:14.817103 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED
2025-05-04 01:08:14.818953 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state STARTED
2025-05-04 01:08:14.821999 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task 18f605dc-0d26-4f5c-bd71-0efacd63a1a6 is in state SUCCESS
2025-05-04 01:08:14.824609 | orchestrator |
2025-05-04 01:08:14.824655 | orchestrator |
2025-05-04 01:08:14.824671 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-04 01:08:14.824686 | orchestrator |
2025-05-04 01:08:14.824701 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-04 01:08:14.825147 | orchestrator | Sunday 04 May 2025 01:04:45 +0000 (0:00:00.334) 0:00:00.334 ************
2025-05-04 01:08:14.825170 | orchestrator | ok: [testbed-node-0]
2025-05-04 01:08:14.825188 | orchestrator | ok: [testbed-node-1]
2025-05-04 01:08:14.825202 | orchestrator | ok: [testbed-node-2]
2025-05-04 01:08:14.825217 | orchestrator |
2025-05-04 01:08:14.825231 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-04 01:08:14.825247 | orchestrator | Sunday 04 May 2025 01:04:45 +0000 (0:00:00.447) 0:00:00.782 ************
2025-05-04 01:08:14.825261 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-04 01:08:14.825276 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-04 01:08:14.825290 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-04 01:08:14.825304 | orchestrator |
2025-05-04 01:08:14.825318 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-04 01:08:14.825333 | orchestrator |
2025-05-04 01:08:14.825347 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-04 01:08:14.825361 | orchestrator | Sunday 04 May 2025 01:04:45 +0000 (0:00:00.326) 0:00:01.109 ************
2025-05-04 01:08:14.825376 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 01:08:14.825392 | orchestrator |
2025-05-04 01:08:14.825406 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-04 01:08:14.825421 | orchestrator | Sunday 04 May 2025 01:04:46 +0000 (0:00:00.798) 0:00:01.907 ************
2025-05-04 01:08:14.825435 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-04 01:08:14.825449 | orchestrator |
2025-05-04 01:08:14.825463 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-04 01:08:14.825477 | orchestrator | Sunday 04 May 2025 01:04:50 +0000 (0:00:03.502) 0:00:05.409 ************
2025-05-04 01:08:14.825515 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-05-04 01:08:14.825532 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-05-04 01:08:14.825547 | orchestrator |
2025-05-04 01:08:14.825581 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-04 01:08:14.825596 | orchestrator | Sunday 04 May 2025 01:04:56 +0000 (0:00:06.467) 0:00:11.877 ************
2025-05-04 01:08:14.825610 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-04 01:08:14.825625 | orchestrator |
2025-05-04 01:08:14.825640 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-04 01:08:14.825654 | orchestrator | Sunday 04 May 2025 01:05:00 +0000 (0:00:03.380) 0:00:15.258 ************
2025-05-04 01:08:14.825668 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-04 01:08:14.825682 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-05-04 01:08:14.825696 | orchestrator |
2025-05-04 01:08:14.825711 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-05-04 01:08:14.825725 | orchestrator | Sunday 04 May 2025 01:05:03 +0000 (0:00:03.168) 0:00:19.071 ************
2025-05-04 01:08:14.825742 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-04 01:08:14.825758 | orchestrator |
2025-05-04 01:08:14.825774 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-05-04 01:08:14.825790 | orchestrator | Sunday 04 May 2025 01:05:07 +0000 (0:00:03.168) 0:00:22.240 ************
2025-05-04 01:08:14.825832 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-05-04 01:08:14.825850 | orchestrator |
2025-05-04 01:08:14.825866 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-05-04 01:08:14.825897 | orchestrator | Sunday 04 May 2025 01:05:12 +0000 (0:00:05.056) 0:00:27.296 ************
2025-05-04 01:08:14.825973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 01:08:14.826007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-04 01:08:14.826098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-04 01:08:14.826150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.826170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.826241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.826259 | orchestrator | 2025-05-04 01:08:14.826274 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-04 01:08:14.826289 | orchestrator | Sunday 04 May 2025 01:05:16 +0000 (0:00:03.949) 0:00:31.246 ************ 2025-05-04 01:08:14.826303 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:08:14.826318 | orchestrator | 2025-05-04 01:08:14.826333 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-04 01:08:14.826347 | orchestrator | Sunday 04 May 2025 01:05:16 +0000 (0:00:00.479) 0:00:31.725 ************ 2025-05-04 01:08:14.826362 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.826376 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:08:14.826390 | orchestrator | changed: [testbed-node-2] 2025-05-04 
01:08:14.826404 | orchestrator | 2025-05-04 01:08:14.826419 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-04 01:08:14.826433 | orchestrator | Sunday 04 May 2025 01:05:26 +0000 (0:00:09.812) 0:00:41.538 ************ 2025-05-04 01:08:14.826454 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826498 | orchestrator | 2025-05-04 01:08:14.826512 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-04 01:08:14.826526 | orchestrator | Sunday 04 May 2025 01:05:28 +0000 (0:00:01.687) 0:00:43.226 ************ 2025-05-04 01:08:14.826540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826554 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826569 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:14.826583 | orchestrator | 2025-05-04 01:08:14.826597 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-04 01:08:14.826611 | orchestrator | Sunday 04 May 2025 01:05:29 +0000 (0:00:01.176) 0:00:44.402 ************ 2025-05-04 01:08:14.826626 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:08:14.826640 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:08:14.826655 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:08:14.826669 | orchestrator | 2025-05-04 01:08:14.826684 | 
orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-04 01:08:14.826698 | orchestrator | Sunday 04 May 2025 01:05:29 +0000 (0:00:00.603) 0:00:45.006 ************ 2025-05-04 01:08:14.826712 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.826726 | orchestrator | 2025-05-04 01:08:14.826740 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-04 01:08:14.826754 | orchestrator | Sunday 04 May 2025 01:05:30 +0000 (0:00:00.200) 0:00:45.207 ************ 2025-05-04 01:08:14.826771 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.826794 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.826841 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.826866 | orchestrator | 2025-05-04 01:08:14.826890 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-04 01:08:14.826913 | orchestrator | Sunday 04 May 2025 01:05:30 +0000 (0:00:00.258) 0:00:45.466 ************ 2025-05-04 01:08:14.826935 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:08:14.826959 | orchestrator | 2025-05-04 01:08:14.826983 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-04 01:08:14.827007 | orchestrator | Sunday 04 May 2025 01:05:31 +0000 (0:00:00.793) 0:00:46.260 ************ 2025-05-04 01:08:14.827065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.827098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.827147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.827184 | orchestrator | 2025-05-04 01:08:14.827200 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-04 01:08:14.827227 | orchestrator | Sunday 04 May 2025 01:05:36 +0000 (0:00:05.332) 0:00:51.593 ************ 2025-05-04 01:08:14.827242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827258 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.827281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827308 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.827323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827344 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.827359 | orchestrator | 2025-05-04 01:08:14.827373 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-04 01:08:14.827387 | orchestrator | Sunday 04 May 2025 01:05:40 +0000 (0:00:04.563) 0:00:56.156 ************ 2025-05-04 01:08:14.827416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827442 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.827457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827479 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.827494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-04 01:08:14.827509 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.827523 | orchestrator | 2025-05-04 01:08:14.827537 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-04 01:08:14.827551 | orchestrator | Sunday 04 May 2025 01:05:45 +0000 (0:00:04.217) 0:01:00.374 ************ 2025-05-04 01:08:14.827565 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.827579 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.827593 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.827607 | orchestrator | 2025-05-04 01:08:14.827627 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-04 01:08:14.827643 | orchestrator | Sunday 04 May 2025 01:05:49 +0000 (0:00:04.039) 0:01:04.413 ************ 2025-05-04 01:08:14.827657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.827874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.827905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.827942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.827968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.828002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.828018 | orchestrator | 2025-05-04 01:08:14.828033 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-04 01:08:14.828047 | orchestrator | Sunday 04 May 2025 01:05:58 +0000 (0:00:09.068) 0:01:13.482 ************ 2025-05-04 01:08:14.828061 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:08:14.828075 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.828090 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:08:14.828104 | orchestrator | 2025-05-04 01:08:14.828118 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-04 01:08:14.828132 | orchestrator | Sunday 04 May 2025 01:06:15 +0000 (0:00:17.117) 0:01:30.600 ************ 2025-05-04 01:08:14.828147 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828161 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828175 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828189 | orchestrator | 2025-05-04 01:08:14.828203 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-04 01:08:14.828217 | orchestrator | Sunday 04 May 2025 01:06:29 +0000 (0:00:13.743) 0:01:44.344 ************ 2025-05-04 01:08:14.828231 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828245 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828259 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828273 | orchestrator | 2025-05-04 01:08:14.828287 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-04 01:08:14.828308 | orchestrator | Sunday 04 May 2025 01:06:36 +0000 (0:00:07.092) 0:01:51.436 ************ 2025-05-04 
01:08:14.828322 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828336 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828350 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828370 | orchestrator | 2025-05-04 01:08:14.828384 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-04 01:08:14.828400 | orchestrator | Sunday 04 May 2025 01:06:45 +0000 (0:00:08.762) 0:02:00.199 ************ 2025-05-04 01:08:14.828415 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828437 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828454 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828470 | orchestrator | 2025-05-04 01:08:14.828486 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-04 01:08:14.828504 | orchestrator | Sunday 04 May 2025 01:06:52 +0000 (0:00:07.754) 0:02:07.954 ************ 2025-05-04 01:08:14.828520 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828536 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828552 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828569 | orchestrator | 2025-05-04 01:08:14.828585 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-04 01:08:14.828600 | orchestrator | Sunday 04 May 2025 01:06:53 +0000 (0:00:00.368) 0:02:08.322 ************ 2025-05-04 01:08:14.828616 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-04 01:08:14.828632 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.828648 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-04 01:08:14.828665 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-04 01:08:14.828681 | 
orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.828697 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:14.828713 | orchestrator | 2025-05-04 01:08:14.828729 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-04 01:08:14.828746 | orchestrator | Sunday 04 May 2025 01:06:56 +0000 (0:00:03.447) 0:02:11.770 ************ 2025-05-04 01:08:14.828763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.828827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.828848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.828882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.828918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-04 01:08:14.828941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-04 01:08:14.828982 | orchestrator | 2025-05-04 01:08:14.829006 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-04 01:08:14.829020 | orchestrator | Sunday 04 May 2025 01:07:00 +0000 (0:00:03.745) 0:02:15.515 ************ 2025-05-04 01:08:14.829035 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:14.829049 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:14.829063 | orchestrator | skipping: [testbed-node-2] 2025-05-04 
01:08:14.829077 | orchestrator | 2025-05-04 01:08:14.829098 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-04 01:08:14.829113 | orchestrator | Sunday 04 May 2025 01:07:00 +0000 (0:00:00.402) 0:02:15.918 ************ 2025-05-04 01:08:14.829127 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829141 | orchestrator | 2025-05-04 01:08:14.829156 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-04 01:08:14.829170 | orchestrator | Sunday 04 May 2025 01:07:02 +0000 (0:00:02.141) 0:02:18.059 ************ 2025-05-04 01:08:14.829184 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829198 | orchestrator | 2025-05-04 01:08:14.829212 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-04 01:08:14.829226 | orchestrator | Sunday 04 May 2025 01:07:05 +0000 (0:00:02.233) 0:02:20.293 ************ 2025-05-04 01:08:14.829241 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829255 | orchestrator | 2025-05-04 01:08:14.829269 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-04 01:08:14.829283 | orchestrator | Sunday 04 May 2025 01:07:07 +0000 (0:00:02.040) 0:02:22.333 ************ 2025-05-04 01:08:14.829297 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829311 | orchestrator | 2025-05-04 01:08:14.829325 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-04 01:08:14.829340 | orchestrator | Sunday 04 May 2025 01:07:34 +0000 (0:00:27.233) 0:02:49.567 ************ 2025-05-04 01:08:14.829354 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829368 | orchestrator | 2025-05-04 01:08:14.829389 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-04 01:08:14.829404 | orchestrator | Sunday 04 May 
2025 01:07:36 +0000 (0:00:02.172) 0:02:51.739 ************ 2025-05-04 01:08:14.829418 | orchestrator | 2025-05-04 01:08:14.829432 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-04 01:08:14.829446 | orchestrator | Sunday 04 May 2025 01:07:36 +0000 (0:00:00.070) 0:02:51.810 ************ 2025-05-04 01:08:14.829460 | orchestrator | 2025-05-04 01:08:14.829474 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-04 01:08:14.829489 | orchestrator | Sunday 04 May 2025 01:07:36 +0000 (0:00:00.055) 0:02:51.866 ************ 2025-05-04 01:08:14.829503 | orchestrator | 2025-05-04 01:08:14.829517 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-04 01:08:14.829531 | orchestrator | Sunday 04 May 2025 01:07:36 +0000 (0:00:00.205) 0:02:52.071 ************ 2025-05-04 01:08:14.829553 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:08:14.829568 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:14.829582 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:08:14.829596 | orchestrator | 2025-05-04 01:08:14.829611 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:08:14.829626 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-04 01:08:14.829643 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-04 01:08:14.829658 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-04 01:08:14.829672 | orchestrator | 2025-05-04 01:08:14.829686 | orchestrator | 2025-05-04 01:08:14.829700 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:08:14.829714 | orchestrator | Sunday 04 May 2025 01:08:11 +0000 
(0:00:34.704) 0:03:26.776 ************ 2025-05-04 01:08:14.829729 | orchestrator | =============================================================================== 2025-05-04 01:08:14.829743 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.70s 2025-05-04 01:08:14.829757 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.23s 2025-05-04 01:08:14.829771 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 17.12s 2025-05-04 01:08:14.829786 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 13.74s 2025-05-04 01:08:14.829800 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 9.81s 2025-05-04 01:08:14.829858 | orchestrator | glance : Copying over config.json files for services -------------------- 9.07s 2025-05-04 01:08:14.829883 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 8.76s 2025-05-04 01:08:14.829905 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.75s 2025-05-04 01:08:14.829920 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.09s 2025-05-04 01:08:14.829934 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.47s 2025-05-04 01:08:14.829949 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.33s 2025-05-04 01:08:14.829963 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.06s 2025-05-04 01:08:14.829977 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.56s 2025-05-04 01:08:14.829992 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.22s 2025-05-04 01:08:14.830006 | orchestrator | glance : Creating TLS backend PEM File 
---------------------------------- 4.04s 2025-05-04 01:08:14.830055 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.95s 2025-05-04 01:08:14.830072 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.81s 2025-05-04 01:08:14.830087 | orchestrator | glance : Check glance containers ---------------------------------------- 3.75s 2025-05-04 01:08:14.830101 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.50s 2025-05-04 01:08:14.830129 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.45s 2025-05-04 01:08:17.878190 | orchestrator | 2025-05-04 01:08:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:17.878330 | orchestrator | 2025-05-04 01:08:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:17.878368 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:17.879606 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:17.881053 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:17.882242 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:17.887204 | orchestrator | 2025-05-04 01:08:17.887263 | orchestrator | 2025-05-04 01:08:17.887280 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:08:17.887295 | orchestrator | 2025-05-04 01:08:17.887310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:08:17.887324 | orchestrator | Sunday 04 May 2025 01:04:56 +0000 (0:00:00.332) 0:00:00.332 ************ 2025-05-04 01:08:17.887339 | orchestrator | ok: 
[testbed-node-0] 2025-05-04 01:08:17.887355 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:08:17.887370 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:08:17.887383 | orchestrator | ok: [testbed-node-3] 2025-05-04 01:08:17.887398 | orchestrator | ok: [testbed-node-4] 2025-05-04 01:08:17.887411 | orchestrator | ok: [testbed-node-5] 2025-05-04 01:08:17.887426 | orchestrator | 2025-05-04 01:08:17.887440 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:08:17.887455 | orchestrator | Sunday 04 May 2025 01:04:57 +0000 (0:00:01.332) 0:00:01.665 ************ 2025-05-04 01:08:17.887469 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-04 01:08:17.887483 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-04 01:08:17.887498 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-04 01:08:17.887512 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-04 01:08:17.888366 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-04 01:08:17.888471 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-04 01:08:17.888489 | orchestrator | 2025-05-04 01:08:17.888928 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-04 01:08:17.888954 | orchestrator | 2025-05-04 01:08:17.888969 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-04 01:08:17.888984 | orchestrator | Sunday 04 May 2025 01:04:59 +0000 (0:00:01.823) 0:00:03.489 ************ 2025-05-04 01:08:17.888999 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:08:17.889015 | orchestrator | 2025-05-04 01:08:17.889030 | orchestrator | TASK [service-ks-register : cinder | Creating services] 
************************ 2025-05-04 01:08:17.889044 | orchestrator | Sunday 04 May 2025 01:05:01 +0000 (0:00:01.455) 0:00:04.945 ************ 2025-05-04 01:08:17.889059 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-04 01:08:17.889073 | orchestrator | 2025-05-04 01:08:17.889087 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-04 01:08:17.889103 | orchestrator | Sunday 04 May 2025 01:05:04 +0000 (0:00:03.356) 0:00:08.301 ************ 2025-05-04 01:08:17.889127 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-04 01:08:17.889152 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-04 01:08:17.889175 | orchestrator | 2025-05-04 01:08:17.889197 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-04 01:08:17.889220 | orchestrator | Sunday 04 May 2025 01:05:11 +0000 (0:00:06.870) 0:00:15.171 ************ 2025-05-04 01:08:17.889242 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-04 01:08:17.889264 | orchestrator | 2025-05-04 01:08:17.889288 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-04 01:08:17.889313 | orchestrator | Sunday 04 May 2025 01:05:14 +0000 (0:00:03.438) 0:00:18.610 ************ 2025-05-04 01:08:17.889329 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:08:17.889343 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-04 01:08:17.889380 | orchestrator | 2025-05-04 01:08:17.889396 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-04 01:08:17.889410 | orchestrator | Sunday 04 May 2025 01:05:18 +0000 (0:00:04.057) 0:00:22.667 ************ 2025-05-04 
01:08:17.889424 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:08:17.889439 | orchestrator | 2025-05-04 01:08:17.889453 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-04 01:08:17.889467 | orchestrator | Sunday 04 May 2025 01:05:22 +0000 (0:00:03.413) 0:00:26.081 ************ 2025-05-04 01:08:17.889482 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-04 01:08:17.889496 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-04 01:08:17.889510 | orchestrator | 2025-05-04 01:08:17.889524 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-04 01:08:17.889539 | orchestrator | Sunday 04 May 2025 01:05:30 +0000 (0:00:08.323) 0:00:34.404 ************ 2025-05-04 01:08:17.889614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.889639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.889657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.889676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.889702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.889718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.889767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.889785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.889800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.889909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.889978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-05-04 01:08:17.890156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.890212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.890268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890344 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.890360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.890374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.890420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.890434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.890504 | orchestrator | 2025-05-04 01:08:17.890517 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-04 01:08:17.890537 | orchestrator | Sunday 04 May 2025 01:05:33 +0000 (0:00:02.695) 0:00:37.100 ************ 2025-05-04 01:08:17.890550 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.890564 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:17.890577 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:17.890590 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:08:17.890602 | orchestrator | 2025-05-04 01:08:17.890615 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-04 01:08:17.890628 | orchestrator | Sunday 04 May 2025 01:05:35 +0000 (0:00:01.985) 0:00:39.085 ************ 2025-05-04 01:08:17.890640 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-04 01:08:17.890653 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-04 01:08:17.890665 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-04 01:08:17.890678 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-04 01:08:17.890690 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-04 01:08:17.890703 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-04 01:08:17.890715 | orchestrator | 2025-05-04 01:08:17.890728 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-04 01:08:17.890740 | orchestrator | Sunday 04 May 2025 01:05:38 
+0000 (0:00:02.710) 0:00:41.796 ************ 2025-05-04 01:08:17.890754 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890769 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890855 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890884 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890898 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890911 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-04 01:08:17.890924 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 01:08:17.890974 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 01:08:17.891007 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 
01:08:17.891022 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 01:08:17.891036 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 01:08:17.891078 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-04 01:08:17.891093 | orchestrator | 2025-05-04 01:08:17.891106 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-04 01:08:17.891119 | orchestrator | Sunday 04 May 2025 01:05:42 +0000 (0:00:04.323) 0:00:46.120 ************ 2025-05-04 01:08:17.891132 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:17.891145 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:17.891173 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-04 01:08:17.891186 | orchestrator | 2025-05-04 01:08:17.891199 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-04 01:08:17.891217 | orchestrator | Sunday 04 May 2025 01:05:44 +0000 (0:00:02.061) 0:00:48.181 ************ 2025-05-04 01:08:17.891230 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-04 01:08:17.891243 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-04 01:08:17.891266 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-04 01:08:17.891288 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-04 01:08:17.891311 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-04 01:08:17.891333 
| orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-04 01:08:17.891354 | orchestrator | 2025-05-04 01:08:17.891378 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-04 01:08:17.891399 | orchestrator | Sunday 04 May 2025 01:05:47 +0000 (0:00:03.022) 0:00:51.204 ************ 2025-05-04 01:08:17.891424 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-04 01:08:17.891447 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-04 01:08:17.891466 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-04 01:08:17.891479 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-04 01:08:17.891491 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-04 01:08:17.891504 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-04 01:08:17.891517 | orchestrator | 2025-05-04 01:08:17.891529 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-04 01:08:17.891542 | orchestrator | Sunday 04 May 2025 01:05:48 +0000 (0:00:01.245) 0:00:52.449 ************ 2025-05-04 01:08:17.891554 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.891572 | orchestrator | 2025-05-04 01:08:17.891588 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-04 01:08:17.891601 | orchestrator | Sunday 04 May 2025 01:05:48 +0000 (0:00:00.118) 0:00:52.567 ************ 2025-05-04 01:08:17.891614 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.891626 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:17.891639 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:17.891651 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:08:17.891664 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:08:17.891677 | orchestrator | skipping: [testbed-node-5] 2025-05-04 
01:08:17.891689 | orchestrator | 2025-05-04 01:08:17.891702 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-04 01:08:17.891714 | orchestrator | Sunday 04 May 2025 01:05:49 +0000 (0:00:00.753) 0:00:53.321 ************ 2025-05-04 01:08:17.891728 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:08:17.891742 | orchestrator | 2025-05-04 01:08:17.891755 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-04 01:08:17.892037 | orchestrator | Sunday 04 May 2025 01:05:52 +0000 (0:00:02.766) 0:00:56.088 ************ 2025-05-04 01:08:17.892055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.892133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.892150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.892178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.892330 | orchestrator | 2025-05-04 01:08:17.892342 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-04 01:08:17.892355 | orchestrator | Sunday 04 May 2025 01:05:57 +0000 (0:00:05.529) 0:01:01.617 ************ 2025-05-04 01:08:17.892397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2025-05-04 01:08:17.892459 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.892472 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:17.892486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892560 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:17.892574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892600 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:08:17.892614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892647 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:08:17.892695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892724 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:08:17.892737 | orchestrator | 2025-05-04 01:08:17.892750 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-04 01:08:17.892763 | orchestrator | Sunday 04 May 2025 01:06:00 +0000 (0:00:02.853) 0:01:04.471 ************ 2025-05-04 01:08:17.892776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892837 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:08:17.892852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892913 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.892926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.892940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892953 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:17.892966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.892990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893045 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:08:17.893059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893071 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:08:17.893085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893118 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:08:17.893131 | orchestrator | 2025-05-04 01:08:17.893144 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-04 01:08:17.893156 | orchestrator | Sunday 04 May 2025 01:06:04 +0000 (0:00:04.262) 0:01:08.733 ************ 2025-05-04 01:08:17.893169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.893240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.893363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.893465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893582 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.893639 | orchestrator | 2025-05-04 01:08:17.893652 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-04 01:08:17.893670 | orchestrator | Sunday 04 May 2025 01:06:09 +0000 (0:00:04.171) 0:01:12.905 ************ 2025-05-04 01:08:17.893684 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-04 01:08:17.893697 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:08:17.893709 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-04 01:08:17.893722 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:08:17.893735 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-04 
01:08:17.893748 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:08:17.893766 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-04 01:08:17.893779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-04 01:08:17.893792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-04 01:08:17.893805 | orchestrator | 2025-05-04 01:08:17.893865 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-04 01:08:17.893879 | orchestrator | Sunday 04 May 2025 01:06:12 +0000 (0:00:03.601) 0:01:16.506 ************ 2025-05-04 01:08:17.893892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893966 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-04 01:08:17.893980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.893993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.894006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.894064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894310 | orchestrator |
2025-05-04 01:08:17.894324 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-05-04 01:08:17.894335 | orchestrator | Sunday 04 May 2025 01:06:25 +0000 (0:00:12.355) 0:01:28.862 ************
2025-05-04 01:08:17.894345 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:08:17.894356 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:08:17.894366 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:08:17.894376 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:08:17.894387 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:08:17.894397 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:08:17.894407 | orchestrator |
2025-05-04 01:08:17.894417 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-05-04 01:08:17.894428 | orchestrator | Sunday 04 May 2025 01:06:29 +0000 (0:00:04.321) 0:01:33.183 ************
2025-05-04 01:08:17.894438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894590 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:08:17.894601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894654 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:08:17.894664 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:08:17.894675 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:08:17.894685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894733 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:08:17.894748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894796 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:08:17.894828 | orchestrator |
2025-05-04 01:08:17.894839 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-05-04 01:08:17.894849 | orchestrator | Sunday 04 May 2025 01:06:31 +0000 (0:00:02.244) 0:01:35.427 ************
2025-05-04 01:08:17.894860 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:08:17.894870 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:08:17.894880 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:08:17.894890 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:08:17.894900 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:08:17.894911 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:08:17.894921 | orchestrator |
2025-05-04 01:08:17.894931 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-05-04 01:08:17.894942 | orchestrator | Sunday 04 May 2025 01:06:32 +0000 (0:00:01.002) 0:01:36.429 ************
2025-05-04 01:08:17.894963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.894986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.894996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.895012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.895023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.895038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.895049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-04 01:08:17.895060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-04 01:08:17.895077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-04 01:08:17.895092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-04 01:08:17.895245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-04 01:08:17.895266 | orchestrator | 2025-05-04 01:08:17.895276 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-04 01:08:17.895287 | orchestrator | Sunday 04 May 2025 01:06:35 +0000 (0:00:02.934) 0:01:39.363 ************ 2025-05-04 01:08:17.895302 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:17.895313 | orchestrator | 
skipping: [testbed-node-1] 2025-05-04 01:08:17.895323 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:08:17.895333 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:08:17.895343 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:08:17.895353 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:08:17.895364 | orchestrator | 2025-05-04 01:08:17.895374 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-04 01:08:17.895384 | orchestrator | Sunday 04 May 2025 01:06:36 +0000 (0:00:00.567) 0:01:39.930 ************ 2025-05-04 01:08:17.895395 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:17.895405 | orchestrator | 2025-05-04 01:08:17.895415 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-04 01:08:17.895425 | orchestrator | Sunday 04 May 2025 01:06:38 +0000 (0:00:02.487) 0:01:42.418 ************ 2025-05-04 01:08:17.895436 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:17.895446 | orchestrator | 2025-05-04 01:08:17.895456 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-04 01:08:17.895467 | orchestrator | Sunday 04 May 2025 01:06:41 +0000 (0:00:02.511) 0:01:44.930 ************ 2025-05-04 01:08:17.895477 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:17.895487 | orchestrator | 2025-05-04 01:08:17.895497 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895507 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:20.255) 0:02:05.185 ************ 2025-05-04 01:08:17.895518 | orchestrator | 2025-05-04 01:08:17.895528 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895538 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:00.057) 0:02:05.243 ************ 2025-05-04 01:08:17.895548 | orchestrator | 
2025-05-04 01:08:17.895559 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895569 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:00.219) 0:02:05.462 ************ 2025-05-04 01:08:17.895579 | orchestrator | 2025-05-04 01:08:17.895589 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895600 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:00.049) 0:02:05.512 ************ 2025-05-04 01:08:17.895610 | orchestrator | 2025-05-04 01:08:17.895620 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895630 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:00.052) 0:02:05.565 ************ 2025-05-04 01:08:17.895640 | orchestrator | 2025-05-04 01:08:17.895651 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-04 01:08:17.895661 | orchestrator | Sunday 04 May 2025 01:07:01 +0000 (0:00:00.049) 0:02:05.615 ************ 2025-05-04 01:08:17.895671 | orchestrator | 2025-05-04 01:08:17.895681 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-04 01:08:17.895691 | orchestrator | Sunday 04 May 2025 01:07:02 +0000 (0:00:00.179) 0:02:05.794 ************ 2025-05-04 01:08:17.895702 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:17.895712 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:08:17.895722 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:08:17.895733 | orchestrator | 2025-05-04 01:08:17.895743 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-04 01:08:17.895753 | orchestrator | Sunday 04 May 2025 01:07:24 +0000 (0:00:22.157) 0:02:27.952 ************ 2025-05-04 01:08:17.895763 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:08:17.895774 | orchestrator | 
changed: [testbed-node-1] 2025-05-04 01:08:17.895784 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:08:17.895794 | orchestrator | 2025-05-04 01:08:17.895804 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-04 01:08:17.895834 | orchestrator | Sunday 04 May 2025 01:07:37 +0000 (0:00:12.898) 0:02:40.850 ************ 2025-05-04 01:08:20.943181 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:08:20.943339 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:08:20.943358 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:08:20.943374 | orchestrator | 2025-05-04 01:08:20.943390 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-04 01:08:20.943406 | orchestrator | Sunday 04 May 2025 01:08:03 +0000 (0:00:26.842) 0:03:07.692 ************ 2025-05-04 01:08:20.943421 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:08:20.943435 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:08:20.943450 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:08:20.943464 | orchestrator | 2025-05-04 01:08:20.943479 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-04 01:08:20.943494 | orchestrator | Sunday 04 May 2025 01:08:15 +0000 (0:00:11.284) 0:03:18.977 ************ 2025-05-04 01:08:20.943508 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:08:20.943522 | orchestrator | 2025-05-04 01:08:20.943537 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:08:20.943552 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-04 01:08:20.943569 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-04 01:08:20.943583 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  
rescued=0 ignored=0 2025-05-04 01:08:20.943597 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:08:20.943612 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:08:20.943626 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-04 01:08:20.943640 | orchestrator | 2025-05-04 01:08:20.943655 | orchestrator | 2025-05-04 01:08:20.943669 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:08:20.943684 | orchestrator | Sunday 04 May 2025 01:08:15 +0000 (0:00:00.691) 0:03:19.668 ************ 2025-05-04 01:08:20.943700 | orchestrator | =============================================================================== 2025-05-04 01:08:20.943716 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.84s 2025-05-04 01:08:20.943733 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.16s 2025-05-04 01:08:20.943749 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.26s 2025-05-04 01:08:20.943766 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.90s 2025-05-04 01:08:20.943783 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.36s 2025-05-04 01:08:20.943800 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.28s 2025-05-04 01:08:20.943850 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.32s 2025-05-04 01:08:20.943867 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.87s 2025-05-04 01:08:20.944034 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.53s 
2025-05-04 01:08:20.944051 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.32s 2025-05-04 01:08:20.944067 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.32s 2025-05-04 01:08:20.944082 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 4.26s 2025-05-04 01:08:20.944096 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.17s 2025-05-04 01:08:20.944110 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s 2025-05-04 01:08:20.944137 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.60s 2025-05-04 01:08:20.944152 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.44s 2025-05-04 01:08:20.944166 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.41s 2025-05-04 01:08:20.944180 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.36s 2025-05-04 01:08:20.944195 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.02s 2025-05-04 01:08:20.944209 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.93s 2025-05-04 01:08:20.944223 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task 19d7d962-13b3-43c9-8286-f56280be194a is in state SUCCESS 2025-05-04 01:08:20.944238 | orchestrator | 2025-05-04 01:08:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:20.944252 | orchestrator | 2025-05-04 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:20.944286 | orchestrator | 2025-05-04 01:08:20 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:20.944388 | orchestrator | 2025-05-04 01:08:20 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 
is in state STARTED 2025-05-04 01:08:20.944413 | orchestrator | 2025-05-04 01:08:20 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:20.945339 | orchestrator | 2025-05-04 01:08:20 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:20.946192 | orchestrator | 2025-05-04 01:08:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:20.946349 | orchestrator | 2025-05-04 01:08:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:23.997256 | orchestrator | 2025-05-04 01:08:23 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:24.004984 | orchestrator | 2025-05-04 01:08:23 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:24.010873 | orchestrator | 2025-05-04 01:08:24 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:24.015717 | orchestrator | 2025-05-04 01:08:24 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:24.016989 | orchestrator | 2025-05-04 01:08:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:27.059266 | orchestrator | 2025-05-04 01:08:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:27.059414 | orchestrator | 2025-05-04 01:08:27 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:27.059940 | orchestrator | 2025-05-04 01:08:27 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:27.061571 | orchestrator | 2025-05-04 01:08:27 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:27.062948 | orchestrator | 2025-05-04 01:08:27 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:27.064538 | orchestrator | 2025-05-04 01:08:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in 
state STARTED 2025-05-04 01:08:27.064769 | orchestrator | 2025-05-04 01:08:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:30.125065 | orchestrator | 2025-05-04 01:08:30 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:30.126600 | orchestrator | 2025-05-04 01:08:30 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:30.130274 | orchestrator | 2025-05-04 01:08:30 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:30.131989 | orchestrator | 2025-05-04 01:08:30 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:30.133438 | orchestrator | 2025-05-04 01:08:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:30.133565 | orchestrator | 2025-05-04 01:08:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:33.182759 | orchestrator | 2025-05-04 01:08:33 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:33.185097 | orchestrator | 2025-05-04 01:08:33 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:33.185196 | orchestrator | 2025-05-04 01:08:33 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:33.185735 | orchestrator | 2025-05-04 01:08:33 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:33.188625 | orchestrator | 2025-05-04 01:08:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:36.245066 | orchestrator | 2025-05-04 01:08:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:36.245223 | orchestrator | 2025-05-04 01:08:36 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:36.246599 | orchestrator | 2025-05-04 01:08:36 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 
01:08:36.247953 | orchestrator | 2025-05-04 01:08:36 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:36.249916 | orchestrator | 2025-05-04 01:08:36 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:36.251215 | orchestrator | 2025-05-04 01:08:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:36.251686 | orchestrator | 2025-05-04 01:08:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:39.296330 | orchestrator | 2025-05-04 01:08:39 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:39.298404 | orchestrator | 2025-05-04 01:08:39 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:39.300122 | orchestrator | 2025-05-04 01:08:39 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:39.301708 | orchestrator | 2025-05-04 01:08:39 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:39.303143 | orchestrator | 2025-05-04 01:08:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:42.352482 | orchestrator | 2025-05-04 01:08:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:42.352656 | orchestrator | 2025-05-04 01:08:42 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:42.353147 | orchestrator | 2025-05-04 01:08:42 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:42.354357 | orchestrator | 2025-05-04 01:08:42 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:42.355428 | orchestrator | 2025-05-04 01:08:42 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:42.356478 | orchestrator | 2025-05-04 01:08:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 
01:08:42.356786 | orchestrator | 2025-05-04 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:45.411459 | orchestrator | 2025-05-04 01:08:45 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:45.414294 | orchestrator | 2025-05-04 01:08:45 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:45.417400 | orchestrator | 2025-05-04 01:08:45 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:45.418872 | orchestrator | 2025-05-04 01:08:45 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:45.422229 | orchestrator | 2025-05-04 01:08:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:45.422471 | orchestrator | 2025-05-04 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:48.462841 | orchestrator | 2025-05-04 01:08:48 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:48.464078 | orchestrator | 2025-05-04 01:08:48 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:48.466076 | orchestrator | 2025-05-04 01:08:48 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:48.467874 | orchestrator | 2025-05-04 01:08:48 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:48.470283 | orchestrator | 2025-05-04 01:08:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:51.521158 | orchestrator | 2025-05-04 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:51.521301 | orchestrator | 2025-05-04 01:08:51 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:51.522825 | orchestrator | 2025-05-04 01:08:51 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:51.524338 | orchestrator 
| 2025-05-04 01:08:51 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:51.526591 | orchestrator | 2025-05-04 01:08:51 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:51.529257 | orchestrator | 2025-05-04 01:08:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:54.575717 | orchestrator | 2025-05-04 01:08:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:54.575924 | orchestrator | 2025-05-04 01:08:54 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:54.576664 | orchestrator | 2025-05-04 01:08:54 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:54.578610 | orchestrator | 2025-05-04 01:08:54 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:54.580412 | orchestrator | 2025-05-04 01:08:54 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:54.582584 | orchestrator | 2025-05-04 01:08:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:57.620390 | orchestrator | 2025-05-04 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:08:57.620537 | orchestrator | 2025-05-04 01:08:57 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:08:57.621371 | orchestrator | 2025-05-04 01:08:57 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:08:57.622115 | orchestrator | 2025-05-04 01:08:57 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:08:57.622152 | orchestrator | 2025-05-04 01:08:57 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:08:57.623122 | orchestrator | 2025-05-04 01:08:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:08:57.623291 | orchestrator | 
2025-05-04 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:00.663772 | orchestrator | 2025-05-04 01:09:00 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:00.667901 | orchestrator | 2025-05-04 01:09:00 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:00.672860 | orchestrator | 2025-05-04 01:09:00 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:00.674986 | orchestrator | 2025-05-04 01:09:00 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:09:00.677200 | orchestrator | 2025-05-04 01:09:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:00.677334 | orchestrator | 2025-05-04 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:03.725874 | orchestrator | 2025-05-04 01:09:03 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:03.726725 | orchestrator | 2025-05-04 01:09:03 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:03.728373 | orchestrator | 2025-05-04 01:09:03 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:03.729547 | orchestrator | 2025-05-04 01:09:03 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:09:03.731060 | orchestrator | 2025-05-04 01:09:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:06.794467 | orchestrator | 2025-05-04 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:06.794624 | orchestrator | 2025-05-04 01:09:06 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:06.796076 | orchestrator | 2025-05-04 01:09:06 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:06.796119 | orchestrator | 2025-05-04 01:09:06 | INFO  | 
Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:06.796147 | orchestrator | 2025-05-04 01:09:06 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state STARTED 2025-05-04 01:09:06.797409 | orchestrator | 2025-05-04 01:09:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:06.797587 | orchestrator | 2025-05-04 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:09.848066 | orchestrator | 2025-05-04 01:09:09 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:09.850699 | orchestrator | 2025-05-04 01:09:09 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:09.852344 | orchestrator | 2025-05-04 01:09:09 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:09.853352 | orchestrator | 2025-05-04 01:09:09 | INFO  | Task 5176e9f3-d652-4dc9-a284-306e51f9e6fd is in state SUCCESS 2025-05-04 01:09:09.854710 | orchestrator | 2025-05-04 01:09:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:12.911340 | orchestrator | 2025-05-04 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:12.911506 | orchestrator | 2025-05-04 01:09:12 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:12.914152 | orchestrator | 2025-05-04 01:09:12 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:12.916146 | orchestrator | 2025-05-04 01:09:12 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:12.917564 | orchestrator | 2025-05-04 01:09:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:15.974647 | orchestrator | 2025-05-04 01:09:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:15.974983 | orchestrator | 2025-05-04 01:09:15 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:15.976495 | orchestrator | 2025-05-04 01:09:15 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:15.976527 | orchestrator | 2025-05-04 01:09:15 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:15.976578 | orchestrator | 2025-05-04 01:09:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:19.021751 | orchestrator | 2025-05-04 01:09:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:19.021991 | orchestrator | 2025-05-04 01:09:19 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:19.024434 | orchestrator | 2025-05-04 01:09:19 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:19.028987 | orchestrator | 2025-05-04 01:09:19 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:19.030404 | orchestrator | 2025-05-04 01:09:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:22.079290 | orchestrator | 2025-05-04 01:09:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:22.079413 | orchestrator | 2025-05-04 01:09:22 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:22.080894 | orchestrator | 2025-05-04 01:09:22 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:22.082965 | orchestrator | 2025-05-04 01:09:22 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:22.085339 | orchestrator | 2025-05-04 01:09:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:22.085565 | orchestrator | 2025-05-04 01:09:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:25.131043 | orchestrator | 2025-05-04 01:09:25 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:25.132108 | orchestrator | 2025-05-04 01:09:25 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:25.133954 | orchestrator | 2025-05-04 01:09:25 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:25.135828 | orchestrator | 2025-05-04 01:09:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:28.196047 | orchestrator | 2025-05-04 01:09:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:28.196199 | orchestrator | 2025-05-04 01:09:28 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:28.197214 | orchestrator | 2025-05-04 01:09:28 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:28.200196 | orchestrator | 2025-05-04 01:09:28 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:31.266703 | orchestrator | 2025-05-04 01:09:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:31.266891 | orchestrator | 2025-05-04 01:09:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:31.267115 | orchestrator | 2025-05-04 01:09:31 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:31.267143 | orchestrator | 2025-05-04 01:09:31 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:31.267196 | orchestrator | 2025-05-04 01:09:31 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:31.267218 | orchestrator | 2025-05-04 01:09:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:34.315283 | orchestrator | 2025-05-04 01:09:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:34.315415 | orchestrator | 2025-05-04 01:09:34 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:34.315877 | orchestrator | 2025-05-04 01:09:34 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:34.316789 | orchestrator | 2025-05-04 01:09:34 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:34.318129 | orchestrator | 2025-05-04 01:09:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:37.366367 | orchestrator | 2025-05-04 01:09:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:37.366548 | orchestrator | 2025-05-04 01:09:37 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:37.369750 | orchestrator | 2025-05-04 01:09:37 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:37.369918 | orchestrator | 2025-05-04 01:09:37 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:37.371739 | orchestrator | 2025-05-04 01:09:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:40.434938 | orchestrator | 2025-05-04 01:09:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:40.435087 | orchestrator | 2025-05-04 01:09:40 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:40.435294 | orchestrator | 2025-05-04 01:09:40 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:40.436340 | orchestrator | 2025-05-04 01:09:40 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:40.439403 | orchestrator | 2025-05-04 01:09:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:43.490311 | orchestrator | 2025-05-04 01:09:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:43.490464 | orchestrator | 2025-05-04 01:09:43 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:43.491283 | orchestrator | 2025-05-04 01:09:43 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:43.492748 | orchestrator | 2025-05-04 01:09:43 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:43.493834 | orchestrator | 2025-05-04 01:09:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:43.494199 | orchestrator | 2025-05-04 01:09:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:46.540906 | orchestrator | 2025-05-04 01:09:46 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:46.542594 | orchestrator | 2025-05-04 01:09:46 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:46.544914 | orchestrator | 2025-05-04 01:09:46 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:46.546699 | orchestrator | 2025-05-04 01:09:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:49.591817 | orchestrator | 2025-05-04 01:09:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:49.591983 | orchestrator | 2025-05-04 01:09:49 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:49.593344 | orchestrator | 2025-05-04 01:09:49 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:49.594835 | orchestrator | 2025-05-04 01:09:49 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:49.597576 | orchestrator | 2025-05-04 01:09:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:52.648305 | orchestrator | 2025-05-04 01:09:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:52.648451 | orchestrator | 2025-05-04 01:09:52 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:52.650857 | orchestrator | 2025-05-04 01:09:52 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:52.652854 | orchestrator | 2025-05-04 01:09:52 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:52.655009 | orchestrator | 2025-05-04 01:09:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:55.704837 | orchestrator | 2025-05-04 01:09:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:55.705022 | orchestrator | 2025-05-04 01:09:55 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:55.705216 | orchestrator | 2025-05-04 01:09:55 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:55.705254 | orchestrator | 2025-05-04 01:09:55 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:55.707658 | orchestrator | 2025-05-04 01:09:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:58.750647 | orchestrator | 2025-05-04 01:09:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:09:58.750846 | orchestrator | 2025-05-04 01:09:58 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:09:58.752333 | orchestrator | 2025-05-04 01:09:58 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:09:58.753335 | orchestrator | 2025-05-04 01:09:58 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:09:58.754149 | orchestrator | 2025-05-04 01:09:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:09:58.754399 | orchestrator | 2025-05-04 01:09:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:10:01.804736 | orchestrator | 2025-05-04 01:10:01 | INFO  | Task 
c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:10:01.806343 | orchestrator | 2025-05-04 01:10:01 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:10:01.809054 | orchestrator | 2025-05-04 01:10:01 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state STARTED 2025-05-04 01:10:04.858815 | orchestrator | 2025-05-04 01:10:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:10:04.858949 | orchestrator | 2025-05-04 01:10:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:10:04.858988 | orchestrator | 2025-05-04 01:10:04 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state STARTED 2025-05-04 01:10:04.860619 | orchestrator | 2025-05-04 01:10:04 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED 2025-05-04 01:10:04.862089 | orchestrator | 2025-05-04 01:10:04 | INFO  | Task 917f9bd2-9bbc-4965-bf0e-1d60ea01cca9 is in state SUCCESS 2025-05-04 01:10:04.863929 | orchestrator | 2025-05-04 01:10:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:10:07.924058 | orchestrator | 2025-05-04 01:10:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:10:07.924207 | orchestrator | 2025-05-04 01:10:07.924228 | orchestrator | 2025-05-04 01:10:07.924243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:10:07.924258 | orchestrator | 2025-05-04 01:10:07.924273 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:10:07.924287 | orchestrator | Sunday 04 May 2025 01:08:14 +0000 (0:00:00.328) 0:00:00.328 ************ 2025-05-04 01:10:07.924302 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.924345 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:10:07.924361 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:10:07.924375 | orchestrator | 2025-05-04 01:10:07.924390 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:10:07.924404 | orchestrator | Sunday 04 May 2025 01:08:15 +0000 (0:00:00.401) 0:00:00.729 ************ 2025-05-04 01:10:07.924419 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-04 01:10:07.924434 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-04 01:10:07.924450 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-04 01:10:07.924467 | orchestrator | 2025-05-04 01:10:07.924482 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-04 01:10:07.924498 | orchestrator | 2025-05-04 01:10:07.924515 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-04 01:10:07.924531 | orchestrator | Sunday 04 May 2025 01:08:15 +0000 (0:00:00.321) 0:00:01.051 ************ 2025-05-04 01:10:07.924547 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:10:07.924565 | orchestrator | 2025-05-04 01:10:07.924581 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-04 01:10:07.924615 | orchestrator | Sunday 04 May 2025 01:08:16 +0000 (0:00:00.833) 0:00:01.884 ************ 2025-05-04 01:10:07.924634 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-04 01:10:07.924650 | orchestrator | 2025-05-04 01:10:07.924665 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-04 01:10:07.924681 | orchestrator | Sunday 04 May 2025 01:08:19 +0000 (0:00:03.547) 0:00:05.432 ************ 2025-05-04 01:10:07.924696 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-04 01:10:07.924712 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-04 01:10:07.924729 | orchestrator | 2025-05-04 01:10:07.924767 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-04 01:10:07.924783 | orchestrator | Sunday 04 May 2025 01:08:26 +0000 (0:00:06.629) 0:00:12.062 ************ 2025-05-04 01:10:07.924798 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-04 01:10:07.924812 | orchestrator | 2025-05-04 01:10:07.924827 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-04 01:10:07.924841 | orchestrator | Sunday 04 May 2025 01:08:29 +0000 (0:00:03.356) 0:00:15.418 ************ 2025-05-04 01:10:07.924855 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:10:07.924870 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-04 01:10:07.924884 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-04 01:10:07.924898 | orchestrator | 2025-05-04 01:10:07.924913 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-04 01:10:07.924927 | orchestrator | Sunday 04 May 2025 01:08:38 +0000 (0:00:08.367) 0:00:23.786 ************ 2025-05-04 01:10:07.924941 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:10:07.924955 | orchestrator | 2025-05-04 01:10:07.924969 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-04 01:10:07.924983 | orchestrator | Sunday 04 May 2025 01:08:41 +0000 (0:00:03.359) 0:00:27.145 ************ 2025-05-04 01:10:07.925024 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-04 01:10:07.925039 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-04 01:10:07.925053 | orchestrator | 2025-05-04 01:10:07.925068 | orchestrator | TASK [octavia : Adding octavia 
related roles] ********************************** 2025-05-04 01:10:07.925082 | orchestrator | Sunday 04 May 2025 01:08:49 +0000 (0:00:07.877) 0:00:35.023 ************ 2025-05-04 01:10:07.925096 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-04 01:10:07.925110 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-04 01:10:07.925125 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-04 01:10:07.925139 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-04 01:10:07.925153 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-04 01:10:07.925167 | orchestrator | 2025-05-04 01:10:07.925181 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-04 01:10:07.925196 | orchestrator | Sunday 04 May 2025 01:09:05 +0000 (0:00:15.721) 0:00:50.744 ************ 2025-05-04 01:10:07.925210 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:10:07.925224 | orchestrator | 2025-05-04 01:10:07.925239 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-04 01:10:07.925253 | orchestrator | Sunday 04 May 2025 01:09:06 +0000 (0:00:00.878) 0:00:51.623 ************ 2025-05-04 01:10:07.925280 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-04 01:10:07.925300 | orchestrator | 2025-05-04 01:10:07.925314 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:10:07.925335 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.925351 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.925367 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.925381 | orchestrator | 2025-05-04 01:10:07.925395 | orchestrator | 2025-05-04 01:10:07.925410 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:10:07.925424 | orchestrator | Sunday 04 May 2025 01:09:09 +0000 (0:00:03.295) 0:00:54.919 ************ 2025-05-04 01:10:07.925439 | orchestrator | =============================================================================== 2025-05-04 01:10:07.925453 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.72s 2025-05-04 01:10:07.925468 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.37s 2025-05-04 01:10:07.925482 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.88s 2025-05-04 01:10:07.925497 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.63s 2025-05-04 01:10:07.925511 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.55s 2025-05-04 01:10:07.925525 | orchestrator | service-ks-register : octavia | Creating roles --------------------------
3.36s 2025-05-04 01:10:07.925539 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.36s 2025-05-04 01:10:07.925553 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.30s 2025-05-04 01:10:07.925575 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.88s 2025-05-04 01:10:07.925590 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.83s 2025-05-04 01:10:07.925604 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-04 01:10:07.925624 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2025-05-04 01:10:07.925638 | orchestrator | 2025-05-04 01:10:07.925653 | orchestrator | 2025-05-04 01:10:07.925667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:10:07.925681 | orchestrator | 2025-05-04 01:10:07.925696 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:10:07.925710 | orchestrator | Sunday 04 May 2025 01:07:44 +0000 (0:00:00.247) 0:00:00.247 ************ 2025-05-04 01:10:07.925725 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.925758 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:10:07.925774 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:10:07.925789 | orchestrator | 2025-05-04 01:10:07.925803 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:10:07.925817 | orchestrator | Sunday 04 May 2025 01:07:44 +0000 (0:00:00.513) 0:00:00.761 ************ 2025-05-04 01:10:07.925832 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-04 01:10:07.925846 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-04 01:10:07.925861 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-05-04 01:10:07.925875 | orchestrator | 2025-05-04 01:10:07.925889 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-04 01:10:07.925904 | orchestrator | 2025-05-04 01:10:07.925918 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-04 01:10:07.925932 | orchestrator | Sunday 04 May 2025 01:07:45 +0000 (0:00:00.631) 0:00:01.392 ************ 2025-05-04 01:10:07.925947 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.925961 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:10:07.925975 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:10:07.925990 | orchestrator | 2025-05-04 01:10:07.926004 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:10:07.926069 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.926088 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.926102 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:10:07.926117 | orchestrator | 2025-05-04 01:10:07.926131 | orchestrator | 2025-05-04 01:10:07.926145 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:10:07.926160 | orchestrator | Sunday 04 May 2025 01:10:03 +0000 (0:02:18.280) 0:02:19.672 ************ 2025-05-04 01:10:07.926174 | orchestrator | =============================================================================== 2025-05-04 01:10:07.926188 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 138.28s 2025-05-04 01:10:07.926202 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-05-04 01:10:07.926216 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 0.51s 2025-05-04 01:10:07.926231 | orchestrator | 2025-05-04 01:10:07.926245 | orchestrator | 2025-05-04 01:10:07.926259 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:10:07.926273 | orchestrator | 2025-05-04 01:10:07.926287 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:10:07.926309 | orchestrator | Sunday 04 May 2025 01:08:19 +0000 (0:00:00.326) 0:00:00.326 ************ 2025-05-04 01:10:07.926324 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.926338 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:10:07.926362 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:10:07.926385 | orchestrator | 2025-05-04 01:10:07.926399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:10:07.926414 | orchestrator | Sunday 04 May 2025 01:08:19 +0000 (0:00:00.414) 0:00:00.740 ************ 2025-05-04 01:10:07.926428 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-04 01:10:07.926449 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-04 01:10:07.926463 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-04 01:10:07.926478 | orchestrator | 2025-05-04 01:10:07.926492 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-04 01:10:07.926506 | orchestrator | 2025-05-04 01:10:07.926521 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-04 01:10:07.926535 | orchestrator | Sunday 04 May 2025 01:08:20 +0000 (0:00:00.304) 0:00:01.045 ************ 2025-05-04 01:10:07.926549 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:10:07.926564 | orchestrator | 2025-05-04 01:10:07.926578 | 
orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-04 01:10:07.926592 | orchestrator | Sunday 04 May 2025 01:08:20 +0000 (0:00:00.775) 0:00:01.821 ************ 2025-05-04 01:10:07.926607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926656 | orchestrator | 2025-05-04 01:10:07.926670 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-04 01:10:07.926684 | orchestrator | Sunday 04 May 2025 01:08:21 +0000 (0:00:01.072) 0:00:02.894 ************ 2025-05-04 01:10:07.926699 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-04 01:10:07.926718 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-04 01:10:07.926764 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:10:07.926781 | orchestrator | 2025-05-04 01:10:07.926795 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-04 01:10:07.926817 | orchestrator | Sunday 04 May 2025 01:08:22 +0000 (0:00:00.512) 0:00:03.406 ************ 2025-05-04 01:10:07.926832 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:10:07.926846 | orchestrator | 2025-05-04 01:10:07.926861 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-04 01:10:07.926880 | orchestrator | Sunday 04 May 2025 01:08:22 +0000 (0:00:00.604) 0:00:04.010 ************ 2025-05-04 01:10:07.926904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.926949 | orchestrator | 2025-05-04 01:10:07.926964 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS certificate] *** 2025-05-04 01:10:07.926978 | orchestrator | Sunday 04 May 2025 01:08:24 +0000 (0:00:01.573) 0:00:05.583 ************ 2025-05-04 01:10:07.926993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927030 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.927045 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.927066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927081 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.927096 | orchestrator | 2025-05-04 01:10:07.927111 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-04 01:10:07.927125 | orchestrator | Sunday 04 May 2025 01:08:25 +0000 (0:00:00.630) 0:00:06.214 ************ 2025-05-04 01:10:07.927139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927154 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.927168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927183 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.927198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-04 01:10:07.927213 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.927227 | orchestrator | 2025-05-04 01:10:07.927241 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-04 01:10:07.927255 | orchestrator | Sunday 04 May 2025 01:08:25 +0000 (0:00:00.764) 0:00:06.979 ************ 2025-05-04 01:10:07.927276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927328 | orchestrator | 2025-05-04 01:10:07.927448 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-04 01:10:07.927466 | orchestrator | Sunday 04 May 2025 01:08:27 +0000 (0:00:01.506) 0:00:08.485 ************ 2025-05-04 01:10:07.927480 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.927534 | orchestrator | 2025-05-04 01:10:07.927550 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-04 01:10:07.927564 | orchestrator | Sunday 04 May 2025 01:08:29 +0000 (0:00:01.815) 0:00:10.301 ************ 2025-05-04 01:10:07.927578 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.927593 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.927607 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.927621 | orchestrator | 2025-05-04 01:10:07.927639 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-04 01:10:07.927659 | orchestrator | Sunday 04 May 2025 01:08:29 +0000 (0:00:00.307) 0:00:10.608 ************ 2025-05-04 01:10:07.927674 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-04 01:10:07.927688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-04 01:10:07.927703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-04 01:10:07.927717 | orchestrator | 2025-05-04 01:10:07.927731 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-04 01:10:07.927767 | orchestrator | Sunday 04 May 2025 01:08:31 +0000 (0:00:01.454) 0:00:12.063 ************ 2025-05-04 01:10:07.927782 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-04 01:10:07.927796 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-04 01:10:07.927811 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-04 01:10:07.927825 | orchestrator | 2025-05-04 01:10:07.927846 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-04 01:10:07.927861 | orchestrator | Sunday 04 May 2025 01:08:32 +0000 (0:00:01.393) 0:00:13.457 ************ 2025-05-04 01:10:07.927875 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:10:07.927890 | orchestrator | 2025-05-04 01:10:07.927904 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-04 01:10:07.927918 | orchestrator | Sunday 04 May 2025 01:08:32 +0000 (0:00:00.462) 0:00:13.919 ************ 2025-05-04 01:10:07.927933 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-04 01:10:07.927947 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-04 01:10:07.927962 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.927976 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:10:07.927990 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:10:07.928004 | orchestrator | 2025-05-04 01:10:07.928018 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-04 01:10:07.928032 | orchestrator | Sunday 04 May 2025 01:08:34 +0000 (0:00:01.268) 0:00:15.188 ************ 2025-05-04 01:10:07.928046 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.928060 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.928074 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.928088 | orchestrator | 2025-05-04 01:10:07.928102 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-04 01:10:07.928117 | orchestrator | Sunday 04 May 2025 01:08:34 +0000 (0:00:00.460) 0:00:15.648 ************ 2025-05-04 01:10:07.928131 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088764, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8493528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088764, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8493528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088764, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8493528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-04 01:10:07.928193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088750, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8433526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088750, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8433526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088750, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8433526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088709, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8373525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088709, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8373525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088709, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8373525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088759, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8463526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088759, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8463526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07 | INFO  | Task c86d24cf-90c1-409a-9870-6eab61ef41e1 is in state SUCCESS 2025-05-04 01:10:07.928357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088759, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1746317663.8463526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088701, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088701, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088701, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088741, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8383527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.928999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088741, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8383527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088741, 'dev': 174, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8383527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088757, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8453526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088757, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8453526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 16156, 'inode': 1088757, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8453526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088698, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088698, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088698, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088637, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8053522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088637, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8053522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 84, 'inode': 1088637, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8053522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088704, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088704, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088704, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8193524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088650, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088650, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088650, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088752, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8443527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088752, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8443527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088752, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8443527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088707, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088707, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088707, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088763, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8473527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088763, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8473527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929885 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088763, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8473527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088657, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088657, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929944 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088657, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8183522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088746, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8423526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088746, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8423526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.929988 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088746, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8423526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088640, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8083522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088640, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8083522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-05-04 01:10:07.930088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088640, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8083522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088653, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088653, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088653, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8093522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088708, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088708, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088708, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8213522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088801, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.867353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088801, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.867353, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088801, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.867353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088794, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088794, 'dev': 174, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088794, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088850, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.871353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088850, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.871353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088850, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.871353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088770, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8503528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088770, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8503528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088770, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8503528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088858, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.873353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088858, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.873353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088858, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.873353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088830, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930472 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088830, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088830, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088836, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088836, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088774, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8513527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088774, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8513527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088836, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.868353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088798, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8603528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088798, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8603528, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088774, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8513527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088866, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8743532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088866, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1746317663.8743532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088798, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8603528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1088842, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.870353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 
1088866, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8743532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1088842, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.870353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088779, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8543527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088779, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8543527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1088842, 'dev': 174, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746317663.870353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088777, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8523529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088777, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8523529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088779, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8543527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088784, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8563528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088784, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8563528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.930988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088777, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8523529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088788, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931020 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088784, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8563528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088788, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088872, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.875353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931059 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088788, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.8593528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088872, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.875353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 01:10:07.931096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088872, 'dev': 174, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746317663.875353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-04 
01:10:07.931107 | orchestrator | 2025-05-04 01:10:07.931117 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-04 01:10:07.931128 | orchestrator | Sunday 04 May 2025 01:09:07 +0000 (0:00:33.312) 0:00:48.961 ************ 2025-05-04 01:10:07.931144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.931155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.931166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-04 01:10:07.931177 | orchestrator | 2025-05-04 01:10:07.931187 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-04 01:10:07.931197 | orchestrator | Sunday 04 May 2025 01:09:09 +0000 (0:00:01.093) 0:00:50.055 ************ 2025-05-04 01:10:07.931214 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:10:07.931224 | orchestrator | 2025-05-04 01:10:07.931235 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-04 01:10:07.931245 | orchestrator | Sunday 04 May 2025 01:09:11 +0000 (0:00:02.501) 0:00:52.556 ************ 2025-05-04 01:10:07.931256 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:10:07.931266 | orchestrator | 2025-05-04 01:10:07.931276 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-04 01:10:07.931286 | orchestrator | Sunday 04 May 2025 01:09:13 +0000 (0:00:02.181) 0:00:54.737 ************ 2025-05-04 01:10:07.931297 | orchestrator | 2025-05-04 01:10:07.931307 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-04 01:10:07.931317 | orchestrator | Sunday 04 May 2025 01:09:13 +0000 (0:00:00.060) 0:00:54.798 ************ 2025-05-04 01:10:07.931327 | orchestrator | 2025-05-04 01:10:07.931338 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-04 01:10:07.931348 | orchestrator | Sunday 04 May 2025 01:09:13 +0000 
(0:00:00.070) 0:00:54.868 ************ 2025-05-04 01:10:07.931358 | orchestrator | 2025-05-04 01:10:07.931368 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-04 01:10:07.931379 | orchestrator | Sunday 04 May 2025 01:09:14 +0000 (0:00:00.226) 0:00:55.095 ************ 2025-05-04 01:10:07.931389 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.931399 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.931411 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:10:07.931429 | orchestrator | 2025-05-04 01:10:07.931441 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-04 01:10:07.931452 | orchestrator | Sunday 04 May 2025 01:09:15 +0000 (0:00:01.791) 0:00:56.887 ************ 2025-05-04 01:10:07.931462 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:07.931472 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:07.931482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-04 01:10:07.931494 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-05-04 01:10:07.931504 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.931515 | orchestrator | 2025-05-04 01:10:07.931525 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-04 01:10:07.931535 | orchestrator | Sunday 04 May 2025 01:09:42 +0000 (0:00:26.613) 0:01:23.501 ************ 2025-05-04 01:10:07.931545 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.931555 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:10:07.931565 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:10:07.931576 | orchestrator | 2025-05-04 01:10:07.931586 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-04 01:10:07.931596 | orchestrator | Sunday 04 May 2025 01:10:01 +0000 (0:00:19.254) 0:01:42.755 ************ 2025-05-04 01:10:07.931606 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:10:07.931616 | orchestrator | 2025-05-04 01:10:07.931627 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-04 01:10:07.931637 | orchestrator | Sunday 04 May 2025 01:10:04 +0000 (0:00:02.303) 0:01:45.059 ************ 2025-05-04 01:10:07.931647 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:10:07.931662 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:10:10.971103 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:10:10.971231 | orchestrator | 2025-05-04 01:10:10.971253 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-04 01:10:10.971270 | orchestrator | Sunday 04 May 2025 01:10:04 +0000 (0:00:00.486) 0:01:45.545 ************ 2025-05-04 01:10:10.971286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})
2025-05-04 01:10:10.971351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-04 01:10:10.971369 | orchestrator |
2025-05-04 01:10:10.971384 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-04 01:10:10.971398 | orchestrator | Sunday 04 May 2025 01:10:06 +0000 (0:00:02.438) 0:01:47.984 ************
2025-05-04 01:10:10.971412 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:10:10.971427 | orchestrator |
2025-05-04 01:10:10.971441 | orchestrator | PLAY RECAP *********************************************************************
2025-05-04 01:10:10.971457 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-04 01:10:10.971473 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-04 01:10:10.971488 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-04 01:10:10.971502 | orchestrator |
2025-05-04 01:10:10.971516 | orchestrator |
2025-05-04 01:10:10.971531 | orchestrator | TASKS RECAP ********************************************************************
2025-05-04 01:10:10.971545 | orchestrator | Sunday 04 May 2025 01:10:07 +0000 (0:00:00.367) 0:01:48.351 ************
2025-05-04 01:10:10.971559 | orchestrator | ===============================================================================
2025-05-04 01:10:10.971573 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.31s
2025-05-04 01:10:10.971588 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.61s
2025-05-04 01:10:10.971602 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.25s
2025-05-04 01:10:10.971616 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.50s
2025-05-04 01:10:10.971632 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.44s
2025-05-04 01:10:10.971648 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s
2025-05-04 01:10:10.971664 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.18s
2025-05-04 01:10:10.971680 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.82s
2025-05-04 01:10:10.971695 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.79s
2025-05-04 01:10:10.971716 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.57s
2025-05-04 01:10:10.971797 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.51s
2025-05-04 01:10:10.971819 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.45s
2025-05-04 01:10:10.971835 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s
2025-05-04 01:10:10.971852 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.27s
2025-05-04 01:10:10.971868 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2025-05-04 01:10:10.971883 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.07s
2025-05-04 01:10:10.971900 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.78s
2025-05-04 01:10:10.971915 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.76s
2025-05-04 01:10:10.971931 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.63s
2025-05-04 01:10:10.971948 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s
2025-05-04 01:10:10.971964 | orchestrator | 2025-05-04 01:10:07 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:10:10.971991 | orchestrator | 2025-05-04 01:10:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:10:10.972006 | orchestrator | 2025-05-04 01:10:07 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:10:10.972038 | orchestrator | 2025-05-04 01:10:10 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:10:10.972608 | orchestrator | 2025-05-04 01:10:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:10:14.019192 | orchestrator | 2025-05-04 01:10:10 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:10:14.019348 | orchestrator | 2025-05-04 01:10:14 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:10:14.020493 | orchestrator | 2025-05-04 01:10:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:10:17.077594 | orchestrator | 2025-05-04 01:10:14 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:10:17.077805 | orchestrator | 2025-05-04 01:10:17 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:10:17.078239 | orchestrator | 2025-05-04 01:10:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:10:20.117381 | orchestrator | 2025-05-04 01:10:17 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:10:20.117576 | orchestrator | 2025-05-04 01:10:20 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state STARTED
2025-05-04 01:10:20.119887 |
orchestrator | 2025-05-04 01:10:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
[... repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" polling records from 01:10:20 through 01:14:27 omitted; the state changes in that interval are kept below ...]
2025-05-04 01:12:37.465232 | orchestrator | 2025-05-04 01:12:37 | INFO  | Task 30a7b7f4-f785-40e3-991c-0441e08c7967 is in state STARTED
2025-05-04 01:12:49.711947 | orchestrator | 2025-05-04 01:12:49 | INFO  | Task 30a7b7f4-f785-40e3-991c-0441e08c7967 is in state SUCCESS
2025-05-04 01:14:27.453751 | orchestrator | 2025-05-04 01:14:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04
01:14:30.502374 | orchestrator | 2025-05-04 01:14:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:14:30.502531 | orchestrator | 2025-05-04 01:14:30 | INFO  | Task c62bfd00-2b64-4f36-9ffb-5480f8b81155 is in state SUCCESS 2025-05-04 01:14:30.504516 | orchestrator | 2025-05-04 01:14:30.504587 | orchestrator | None 2025-05-04 01:14:30.504604 | orchestrator | 2025-05-04 01:14:30.504619 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-04 01:14:30.504633 | orchestrator | 2025-05-04 01:14:30.504648 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-04 01:14:30.504662 | orchestrator | Sunday 04 May 2025 01:06:03 +0000 (0:00:00.608) 0:00:00.608 ************ 2025-05-04 01:14:30.504714 | orchestrator | changed: [testbed-manager] 2025-05-04 01:14:30.504733 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.504748 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.504762 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.504776 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.504791 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.504805 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.504819 | orchestrator | 2025-05-04 01:14:30.504834 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-04 01:14:30.504848 | orchestrator | Sunday 04 May 2025 01:06:06 +0000 (0:00:02.458) 0:00:03.066 ************ 2025-05-04 01:14:30.504863 | orchestrator | changed: [testbed-manager] 2025-05-04 01:14:30.504877 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.504891 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.504905 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.504918 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.504933 | orchestrator | changed: [testbed-node-4] 2025-05-04 
01:14:30.504947 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.504978 | orchestrator | 2025-05-04 01:14:30.504993 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-04 01:14:30.505007 | orchestrator | Sunday 04 May 2025 01:06:07 +0000 (0:00:01.479) 0:00:04.546 ************ 2025-05-04 01:14:30.505021 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-04 01:14:30.505108 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-04 01:14:30.505153 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-04 01:14:30.505170 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-04 01:14:30.505186 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-04 01:14:30.505200 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-04 01:14:30.505214 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-04 01:14:30.505228 | orchestrator | 2025-05-04 01:14:30.505243 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-04 01:14:30.505257 | orchestrator | 2025-05-04 01:14:30.505271 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-04 01:14:30.505285 | orchestrator | Sunday 04 May 2025 01:06:08 +0000 (0:00:01.274) 0:00:05.821 ************ 2025-05-04 01:14:30.505299 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.505313 | orchestrator | 2025-05-04 01:14:30.505327 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-04 01:14:30.505341 | orchestrator | Sunday 04 May 2025 01:06:10 +0000 (0:00:01.212) 0:00:07.033 ************ 2025-05-04 01:14:30.505356 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-04 01:14:30.505371 | 
orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-04 01:14:30.505385 | orchestrator | 2025-05-04 01:14:30.505399 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-04 01:14:30.505413 | orchestrator | Sunday 04 May 2025 01:06:15 +0000 (0:00:04.995) 0:00:12.028 ************ 2025-05-04 01:14:30.505455 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 01:14:30.505469 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-04 01:14:30.505483 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.505497 | orchestrator | 2025-05-04 01:14:30.505512 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-04 01:14:30.505526 | orchestrator | Sunday 04 May 2025 01:06:19 +0000 (0:00:04.692) 0:00:16.721 ************ 2025-05-04 01:14:30.505539 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.505580 | orchestrator | 2025-05-04 01:14:30.505595 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-04 01:14:30.505609 | orchestrator | Sunday 04 May 2025 01:06:20 +0000 (0:00:01.065) 0:00:17.787 ************ 2025-05-04 01:14:30.505623 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.505654 | orchestrator | 2025-05-04 01:14:30.505668 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-04 01:14:30.505682 | orchestrator | Sunday 04 May 2025 01:06:23 +0000 (0:00:02.540) 0:00:20.327 ************ 2025-05-04 01:14:30.505696 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.505710 | orchestrator | 2025-05-04 01:14:30.505724 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-04 01:14:30.505744 | orchestrator | Sunday 04 May 2025 01:06:27 +0000 (0:00:03.704) 0:00:24.031 ************ 2025-05-04 01:14:30.505759 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 01:14:30.505773 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.505789 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.505814 | orchestrator | 2025-05-04 01:14:30.505829 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-04 01:14:30.505843 | orchestrator | Sunday 04 May 2025 01:06:28 +0000 (0:00:01.076) 0:00:25.107 ************ 2025-05-04 01:14:30.505857 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.505872 | orchestrator | 2025-05-04 01:14:30.505886 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-04 01:14:30.505900 | orchestrator | Sunday 04 May 2025 01:06:57 +0000 (0:00:29.389) 0:00:54.496 ************ 2025-05-04 01:14:30.505914 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.505928 | orchestrator | 2025-05-04 01:14:30.505942 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-04 01:14:30.505956 | orchestrator | Sunday 04 May 2025 01:07:10 +0000 (0:00:12.748) 0:01:07.245 ************ 2025-05-04 01:14:30.505970 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.505984 | orchestrator | 2025-05-04 01:14:30.505998 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-04 01:14:30.506012 | orchestrator | Sunday 04 May 2025 01:07:20 +0000 (0:00:10.382) 0:01:17.627 ************ 2025-05-04 01:14:30.506098 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.506114 | orchestrator | 2025-05-04 01:14:30.506129 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-04 01:14:30.506143 | orchestrator | Sunday 04 May 2025 01:07:21 +0000 (0:00:01.042) 0:01:18.670 ************ 2025-05-04 01:14:30.506156 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.506170 | orchestrator | 2025-05-04 01:14:30.506184 | 
orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-04 01:14:30.506199 | orchestrator | Sunday 04 May 2025 01:07:22 +0000 (0:00:00.695) 0:01:19.365 ************ 2025-05-04 01:14:30.506213 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.506228 | orchestrator | 2025-05-04 01:14:30.506242 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-04 01:14:30.506256 | orchestrator | Sunday 04 May 2025 01:07:23 +0000 (0:00:00.894) 0:01:20.260 ************ 2025-05-04 01:14:30.506270 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.506284 | orchestrator | 2025-05-04 01:14:30.506298 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-04 01:14:30.506312 | orchestrator | Sunday 04 May 2025 01:07:40 +0000 (0:00:16.947) 0:01:37.207 ************ 2025-05-04 01:14:30.506336 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.506350 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506364 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506379 | orchestrator | 2025-05-04 01:14:30.506393 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-04 01:14:30.506407 | orchestrator | 2025-05-04 01:14:30.506421 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-04 01:14:30.506435 | orchestrator | Sunday 04 May 2025 01:07:40 +0000 (0:00:00.301) 0:01:37.509 ************ 2025-05-04 01:14:30.506449 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.506463 | orchestrator | 2025-05-04 01:14:30.506477 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-04 01:14:30.506491 | orchestrator | 
Sunday 04 May 2025 01:07:41 +0000 (0:00:00.933) 0:01:38.442 ************ 2025-05-04 01:14:30.506505 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506519 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506533 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.506576 | orchestrator | 2025-05-04 01:14:30.506592 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-04 01:14:30.506606 | orchestrator | Sunday 04 May 2025 01:07:43 +0000 (0:00:02.393) 0:01:40.835 ************ 2025-05-04 01:14:30.506620 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506634 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506648 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.506662 | orchestrator | 2025-05-04 01:14:30.506676 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-04 01:14:30.506690 | orchestrator | Sunday 04 May 2025 01:07:46 +0000 (0:00:02.316) 0:01:43.152 ************ 2025-05-04 01:14:30.506704 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.506718 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506732 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506746 | orchestrator | 2025-05-04 01:14:30.506760 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-04 01:14:30.506774 | orchestrator | Sunday 04 May 2025 01:07:46 +0000 (0:00:00.503) 0:01:43.655 ************ 2025-05-04 01:14:30.506789 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-04 01:14:30.506805 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506819 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-04 01:14:30.506834 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506848 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-04 01:14:30.506862 | 
orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-04 01:14:30.506876 | orchestrator | 2025-05-04 01:14:30.506890 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-04 01:14:30.506904 | orchestrator | Sunday 04 May 2025 01:07:54 +0000 (0:00:08.001) 0:01:51.657 ************ 2025-05-04 01:14:30.506918 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.506932 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.506946 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.506960 | orchestrator | 2025-05-04 01:14:30.506974 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-04 01:14:30.506994 | orchestrator | Sunday 04 May 2025 01:07:55 +0000 (0:00:00.633) 0:01:52.291 ************ 2025-05-04 01:14:30.507008 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-04 01:14:30.507051 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.507066 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-04 01:14:30.507080 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507094 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-04 01:14:30.507120 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507134 | orchestrator | 2025-05-04 01:14:30.507148 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-04 01:14:30.507170 | orchestrator | Sunday 04 May 2025 01:07:56 +0000 (0:00:01.426) 0:01:53.717 ************ 2025-05-04 01:14:30.507185 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507285 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507301 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.507349 | orchestrator | 2025-05-04 01:14:30.507363 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] 
****** 2025-05-04 01:14:30.507378 | orchestrator | Sunday 04 May 2025 01:07:57 +0000 (0:00:00.542) 0:01:54.260 ************ 2025-05-04 01:14:30.507392 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507406 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507432 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.507447 | orchestrator | 2025-05-04 01:14:30.507461 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-04 01:14:30.507475 | orchestrator | Sunday 04 May 2025 01:07:58 +0000 (0:00:01.163) 0:01:55.423 ************ 2025-05-04 01:14:30.507489 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507510 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507525 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.507539 | orchestrator | 2025-05-04 01:14:30.507611 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-04 01:14:30.507626 | orchestrator | Sunday 04 May 2025 01:08:01 +0000 (0:00:02.762) 0:01:58.186 ************ 2025-05-04 01:14:30.507641 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507655 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507670 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.507694 | orchestrator | 2025-05-04 01:14:30.507717 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-04 01:14:30.507737 | orchestrator | Sunday 04 May 2025 01:08:20 +0000 (0:00:19.639) 0:02:17.825 ************ 2025-05-04 01:14:30.507791 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507807 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507819 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.507832 | orchestrator | 2025-05-04 01:14:30.507845 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-04 
01:14:30.507857 | orchestrator | Sunday 04 May 2025 01:08:31 +0000 (0:00:10.747) 0:02:28.573 ************ 2025-05-04 01:14:30.507870 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.507883 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507896 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507908 | orchestrator | 2025-05-04 01:14:30.507920 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-04 01:14:30.507933 | orchestrator | Sunday 04 May 2025 01:08:32 +0000 (0:00:01.249) 0:02:29.822 ************ 2025-05-04 01:14:30.507945 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.507958 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.507978 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.507990 | orchestrator | 2025-05-04 01:14:30.508003 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-04 01:14:30.508015 | orchestrator | Sunday 04 May 2025 01:08:44 +0000 (0:00:11.196) 0:02:41.019 ************ 2025-05-04 01:14:30.508028 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.508041 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.508053 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.508065 | orchestrator | 2025-05-04 01:14:30.508078 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-04 01:14:30.508090 | orchestrator | Sunday 04 May 2025 01:08:45 +0000 (0:00:01.511) 0:02:42.530 ************ 2025-05-04 01:14:30.508102 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.508115 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.508127 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.508139 | orchestrator | 2025-05-04 01:14:30.508152 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-04 
01:14:30.508165 | orchestrator | 2025-05-04 01:14:30.508186 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-04 01:14:30.508199 | orchestrator | Sunday 04 May 2025 01:08:46 +0000 (0:00:00.509) 0:02:43.040 ************ 2025-05-04 01:14:30.508211 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.508260 | orchestrator | 2025-05-04 01:14:30.508399 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-04 01:14:30.508413 | orchestrator | Sunday 04 May 2025 01:08:46 +0000 (0:00:00.822) 0:02:43.863 ************ 2025-05-04 01:14:30.508426 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-04 01:14:30.508439 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-04 01:14:30.508451 | orchestrator | 2025-05-04 01:14:30.508464 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-04 01:14:30.508477 | orchestrator | Sunday 04 May 2025 01:08:50 +0000 (0:00:03.146) 0:02:47.009 ************ 2025-05-04 01:14:30.508490 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-04 01:14:30.508504 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-04 01:14:30.508516 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-04 01:14:30.508530 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-04 01:14:30.508542 | orchestrator | 2025-05-04 01:14:30.508583 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-04 01:14:30.508596 | 
orchestrator | Sunday 04 May 2025 01:08:56 +0000 (0:00:06.434) 0:02:53.443 ************ 2025-05-04 01:14:30.508609 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-04 01:14:30.508621 | orchestrator | 2025-05-04 01:14:30.508634 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-04 01:14:30.508646 | orchestrator | Sunday 04 May 2025 01:08:59 +0000 (0:00:03.196) 0:02:56.640 ************ 2025-05-04 01:14:30.508659 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-04 01:14:30.508671 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-04 01:14:30.508684 | orchestrator | 2025-05-04 01:14:30.508697 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-04 01:14:30.508709 | orchestrator | Sunday 04 May 2025 01:09:03 +0000 (0:00:04.237) 0:03:00.877 ************ 2025-05-04 01:14:30.508722 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-04 01:14:30.508734 | orchestrator | 2025-05-04 01:14:30.508753 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-04 01:14:30.508766 | orchestrator | Sunday 04 May 2025 01:09:07 +0000 (0:00:03.305) 0:03:04.183 ************ 2025-05-04 01:14:30.508779 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-04 01:14:30.508791 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-04 01:14:30.508804 | orchestrator | 2025-05-04 01:14:30.508816 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-04 01:14:30.508837 | orchestrator | Sunday 04 May 2025 01:09:15 +0000 (0:00:07.998) 0:03:12.182 ************ 2025-05-04 01:14:30.508853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.508917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.508932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.508946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.508972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.508994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.509041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.509064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.509085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.509104 | orchestrator | 2025-05-04 01:14:30.509122 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-04 01:14:30.509141 | orchestrator | Sunday 04 May 2025 01:09:16 +0000 (0:00:01.666) 0:03:13.848 ************ 2025-05-04 01:14:30.509160 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.509179 | orchestrator | 2025-05-04 01:14:30.509198 | 
orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-04 01:14:30.509218 | orchestrator | Sunday 04 May 2025 01:09:17 +0000 (0:00:00.131) 0:03:13.980 ************ 2025-05-04 01:14:30.509238 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.509258 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.509276 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.509296 | orchestrator | 2025-05-04 01:14:30.509316 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-04 01:14:30.509334 | orchestrator | Sunday 04 May 2025 01:09:17 +0000 (0:00:00.478) 0:03:14.458 ************ 2025-05-04 01:14:30.509354 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-04 01:14:30.509372 | orchestrator | 2025-05-04 01:14:30.509403 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-04 01:14:30.509438 | orchestrator | Sunday 04 May 2025 01:09:17 +0000 (0:00:00.393) 0:03:14.852 ************ 2025-05-04 01:14:30.509459 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.509481 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.509499 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.509535 | orchestrator | 2025-05-04 01:14:30.509628 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-04 01:14:30.509644 | orchestrator | Sunday 04 May 2025 01:09:18 +0000 (0:00:00.291) 0:03:15.143 ************ 2025-05-04 01:14:30.509671 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.509685 | orchestrator | 2025-05-04 01:14:30.509698 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-04 01:14:30.509711 | orchestrator | Sunday 04 May 2025 01:09:19 +0000 (0:00:00.828) 0:03:15.972 
************ 2025-05-04 01:14:30.509733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.509789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.509826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.509864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.509884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.509910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.509924 | orchestrator | 2025-05-04 01:14:30.509935 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-04 01:14:30.509945 | orchestrator | Sunday 04 May 2025 01:09:21 +0000 (0:00:02.550) 
0:03:18.523 ************ 2025-05-04 01:14:30.509956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.509972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.509989 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 01:14:30.509999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510080 | orchestrator | skipping: [testbed-node-1] 
2025-05-04 01:14:30.510091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510118 | orchestrator | skipping: [testbed-node-2] 2025-05-04 
01:14:30.510129 | orchestrator | 2025-05-04 01:14:30.510143 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-04 01:14:30.510161 | orchestrator | Sunday 04 May 2025 01:09:22 +0000 (0:00:00.780) 0:03:19.304 ************ 2025-05-04 01:14:30.510212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510256 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.510269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510298 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.510327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510351 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.510361 | orchestrator | 2025-05-04 01:14:30.510372 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-04 01:14:30.510382 | orchestrator | Sunday 04 May 2025 01:09:23 +0000 (0:00:01.174) 0:03:20.478 ************ 2025-05-04 01:14:30.510393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510531 | orchestrator | 2025-05-04 01:14:30.510542 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-04 01:14:30.510573 | orchestrator | Sunday 04 May 2025 01:09:26 +0000 (0:00:02.687) 0:03:23.166 ************ 2025-05-04 01:14:30.510584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.510640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.510720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510731 | orchestrator | 2025-05-04 01:14:30.510742 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-04 01:14:30.510753 | orchestrator | Sunday 04 May 2025 01:09:32 +0000 (0:00:06.435) 0:03:29.601 ************ 2025-05-04 01:14:30.510764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510802 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.510813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510870 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.510881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-04 01:14:30.510897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.510919 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.510929 | orchestrator | 2025-05-04 01:14:30.510940 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-04 01:14:30.510950 | orchestrator | Sunday 04 May 2025 01:09:33 +0000 (0:00:00.802) 0:03:30.403 ************ 2025-05-04 01:14:30.510960 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.510971 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.510981 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.510991 | orchestrator | 2025-05-04 01:14:30.511002 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-04 01:14:30.511012 | orchestrator | Sunday 04 May 2025 01:09:35 +0000 (0:00:01.745) 0:03:32.149 ************ 2025-05-04 01:14:30.511026 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.511037 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.511048 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.511058 | orchestrator | 2025-05-04 01:14:30.511069 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-04 01:14:30.511084 | orchestrator | Sunday 04 May 2025 01:09:35 +0000 (0:00:00.497) 0:03:32.647 ************ 2025-05-04 01:14:30.511102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.511119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.511130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-04 01:14:30.511154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.511166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.511177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.511193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.511204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.511256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.511269 | orchestrator | 2025-05-04 01:14:30.511280 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-04 01:14:30.511290 | orchestrator | Sunday 04 May 2025 01:09:37 +0000 (0:00:02.176) 0:03:34.823 ************ 2025-05-04 01:14:30.511301 | orchestrator | 2025-05-04 01:14:30.511311 | orchestrator | TASK [nova : Flush handlers] 
***************************************************
2025-05-04 01:14:30.511321 | orchestrator | Sunday 04 May 2025 01:09:38 +0000 (0:00:00.324) 0:03:35.148 ************
2025-05-04 01:14:30.511332 | orchestrator |
2025-05-04 01:14:30.511342 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-04 01:14:30.511353 | orchestrator | Sunday 04 May 2025 01:09:38 +0000 (0:00:00.106) 0:03:35.254 ************
2025-05-04 01:14:30.511363 | orchestrator |
2025-05-04 01:14:30.511378 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-05-04 01:14:30.511388 | orchestrator | Sunday 04 May 2025 01:09:38 +0000 (0:00:00.292) 0:03:35.547 ************
2025-05-04 01:14:30.511398 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:14:30.511409 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:14:30.511419 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:14:30.511429 | orchestrator |
2025-05-04 01:14:30.511439 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-05-04 01:14:30.511450 | orchestrator | Sunday 04 May 2025 01:09:54 +0000 (0:00:16.139) 0:03:51.686 ************
2025-05-04 01:14:30.511460 | orchestrator | changed: [testbed-node-0]
2025-05-04 01:14:30.511471 | orchestrator | changed: [testbed-node-1]
2025-05-04 01:14:30.511481 | orchestrator | changed: [testbed-node-2]
2025-05-04 01:14:30.511497 | orchestrator |
2025-05-04 01:14:30.511508 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-04 01:14:30.511518 | orchestrator |
2025-05-04 01:14:30.511528 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-04 01:14:30.511538 | orchestrator | Sunday 04 May 2025 01:10:05 +0000 (0:00:11.035) 0:04:02.722 ************
2025-05-04 01:14:30.511567 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-04 01:14:30.511580 | orchestrator |
2025-05-04 01:14:30.511590 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-04 01:14:30.511600 | orchestrator | Sunday 04 May 2025 01:10:07 +0000 (0:00:01.497) 0:04:04.220 ************
2025-05-04 01:14:30.511610 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.511636 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.511658 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.511670 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.511680 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.511709 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.511720 | orchestrator |
2025-05-04 01:14:30.511730 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-05-04 01:14:30.511741 | orchestrator | Sunday 04 May 2025 01:10:08 +0000 (0:00:00.759) 0:04:04.980 ************
2025-05-04 01:14:30.511751 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.511761 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.511771 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.511782 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-04 01:14:30.511835 | orchestrator |
2025-05-04 01:14:30.511846 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-04 01:14:30.511856 | orchestrator | Sunday 04 May 2025 01:10:09 +0000 (0:00:01.319) 0:04:06.300 ************
2025-05-04 01:14:30.511867 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-04 01:14:30.511877 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-04 01:14:30.511888 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-04 01:14:30.511898 | orchestrator |
2025-05-04 01:14:30.511909 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-04 01:14:30.511919 | orchestrator | Sunday 04 May 2025 01:10:10 +0000 (0:00:00.663) 0:04:06.963 ************
2025-05-04 01:14:30.511929 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-04 01:14:30.511940 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-04 01:14:30.511950 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-04 01:14:30.511961 | orchestrator |
2025-05-04 01:14:30.511971 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-04 01:14:30.511981 | orchestrator | Sunday 04 May 2025 01:10:11 +0000 (0:00:01.403) 0:04:08.367 ************
2025-05-04 01:14:30.511992 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-05-04 01:14:30.512002 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.512013 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-05-04 01:14:30.512023 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.512033 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-05-04 01:14:30.512044 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.512054 | orchestrator |
2025-05-04 01:14:30.512064 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-04 01:14:30.512074 | orchestrator | Sunday 04 May 2025 01:10:12 +0000 (0:00:00.908) 0:04:09.276 ************
2025-05-04 01:14:30.512085 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512095 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512105 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.512116 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512132 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512143 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512158 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512169 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512179 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.512190 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-04 01:14:30.512200 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512210 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.512225 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512236 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512246 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-04 01:14:30.512256 | orchestrator |
2025-05-04 01:14:30.512271 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-04 01:14:30.512282 | orchestrator | Sunday 04 May 2025 01:10:13 +0000 (0:00:01.032) 0:04:10.309 ************
2025-05-04 01:14:30.512293 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.512303 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.512314 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.512324 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:14:30.512334 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:14:30.512349 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:14:30.512359 |
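The module-load and bridge-nf-call records above amount to a few host-level steps on the compute nodes (testbed-node-3..5). A minimal shell sketch of those steps, not the actual kolla-ansible task code; the DEST prefix and the sysctl.d file name are assumptions so the sketch can run unprivileged:

```shell
# Hypothetical prefix so this can run without root; on a real host it would
# be empty (write straight to /etc) and the commented commands would apply
# the settings live.
DEST="${DEST:-/tmp/nova-cell-demo}"
mkdir -p "$DEST/etc/modules-load.d" "$DEST/etc/sysctl.d"

# TASK [module-load : Persist modules via modules-load.d]
printf 'br_netfilter\n' > "$DEST/etc/modules-load.d/br_netfilter.conf"
# modprobe br_netfilter    # TASK [module-load : Load modules] (needs root)

# TASK [nova-cell : Enable bridge-nf-call sysctl variables]
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n' \
  > "$DEST/etc/sysctl.d/90-bridge-nf.conf"
# sysctl --system          # apply the persisted values (needs root)
```

Persisting via modules-load.d and sysctl.d is what makes the settings survive a reboot, which is why the play records both a load step and a persist step per node.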
orchestrator | 2025-05-04 01:14:30.512370 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-04 01:14:30.512380 | orchestrator | Sunday 04 May 2025 01:10:14 +0000 (0:00:01.219) 0:04:11.528 ************ 2025-05-04 01:14:30.512391 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.512401 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.512411 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.512422 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.512432 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.512442 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.512452 | orchestrator | 2025-05-04 01:14:30.512463 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-04 01:14:30.512473 | orchestrator | Sunday 04 May 2025 01:10:16 +0000 (0:00:01.858) 0:04:13.387 ************ 2025-05-04 01:14:30.512484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.512610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.512622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512642 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.512678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.512704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.512737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.512754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.512767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.512776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.512786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.512801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512826 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.512849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.512859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.512891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.512918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.512950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 
'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.512973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.512982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.512996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.513006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513070 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513138 | orchestrator | 2025-05-04 01:14:30.513147 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-04 01:14:30.513156 | orchestrator | Sunday 04 May 2025 01:10:19 +0000 (0:00:02.553) 0:04:15.940 ************ 2025-05-04 01:14:30.513165 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-04 01:14:30.513196 | orchestrator | 2025-05-04 01:14:30.513206 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-04 01:14:30.513215 | orchestrator | Sunday 04 May 2025 01:10:20 +0000 (0:00:01.598) 0:04:17.539 ************ 2025-05-04 01:14:30.513239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513255 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.513423 | orchestrator | 2025-05-04 01:14:30.513432 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-04 01:14:30.513441 | orchestrator | Sunday 04 May 2025 01:10:24 +0000 (0:00:03.943) 0:04:21.483 ************ 2025-05-04 01:14:30.513458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.513468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.513481 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513879 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.513904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.513914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.513937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.513947 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.513956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.513975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.513994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514003 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.514044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514075 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.514084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514102 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.514126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514152 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.514168 | orchestrator | 2025-05-04 01:14:30.514182 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-04 01:14:30.514197 | orchestrator | Sunday 04 May 2025 01:10:26 +0000 (0:00:01.975) 0:04:23.458 ************ 2025-05-04 01:14:30.514212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.514239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.514256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.514300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.514309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514318 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.514348 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.514370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.514380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.514389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514404 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.514420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514438 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.514447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514473 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.514482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.514492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.514507 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.514517 | orchestrator | 2025-05-04 01:14:30.514527 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-04 01:14:30.514537 | orchestrator | Sunday 04 May 2025 01:10:29 +0000 (0:00:02.521) 0:04:25.979 ************ 2025-05-04 01:14:30.514566 | orchestrator | skipping: 
[testbed-node-0] 2025-05-04 01:14:30.514577 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.514586 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.514596 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-04 01:14:30.514607 | orchestrator | 2025-05-04 01:14:30.514617 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-04 01:14:30.514626 | orchestrator | Sunday 04 May 2025 01:10:30 +0000 (0:00:01.221) 0:04:27.201 ************ 2025-05-04 01:14:30.514640 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-04 01:14:30.514650 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-04 01:14:30.514660 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-04 01:14:30.514670 | orchestrator | 2025-05-04 01:14:30.514679 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-04 01:14:30.514689 | orchestrator | Sunday 04 May 2025 01:10:31 +0000 (0:00:00.892) 0:04:28.093 ************ 2025-05-04 01:14:30.514698 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-04 01:14:30.514708 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-04 01:14:30.514718 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-04 01:14:30.514728 | orchestrator | 2025-05-04 01:14:30.514737 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-04 01:14:30.514747 | orchestrator | Sunday 04 May 2025 01:10:32 +0000 (0:00:00.870) 0:04:28.964 ************ 2025-05-04 01:14:30.514756 | orchestrator | ok: [testbed-node-3] 2025-05-04 01:14:30.514766 | orchestrator | ok: [testbed-node-4] 2025-05-04 01:14:30.514776 | orchestrator | ok: [testbed-node-5] 2025-05-04 01:14:30.514786 | orchestrator | 2025-05-04 01:14:30.514796 | orchestrator | TASK [nova-cell : Extract cinder key from file] 
******************************** 2025-05-04 01:14:30.514805 | orchestrator | Sunday 04 May 2025 01:10:32 +0000 (0:00:00.910) 0:04:29.875 ************ 2025-05-04 01:14:30.514815 | orchestrator | ok: [testbed-node-3] 2025-05-04 01:14:30.514825 | orchestrator | ok: [testbed-node-4] 2025-05-04 01:14:30.514835 | orchestrator | ok: [testbed-node-5] 2025-05-04 01:14:30.514845 | orchestrator | 2025-05-04 01:14:30.514854 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-04 01:14:30.514867 | orchestrator | Sunday 04 May 2025 01:10:33 +0000 (0:00:00.337) 0:04:30.213 ************ 2025-05-04 01:14:30.514876 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-04 01:14:30.514885 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-04 01:14:30.514893 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-04 01:14:30.514902 | orchestrator | 2025-05-04 01:14:30.514911 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-04 01:14:30.514920 | orchestrator | Sunday 04 May 2025 01:10:34 +0000 (0:00:01.407) 0:04:31.620 ************ 2025-05-04 01:14:30.514929 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-04 01:14:30.514938 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-04 01:14:30.514946 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-04 01:14:30.514961 | orchestrator | 2025-05-04 01:14:30.514970 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-04 01:14:30.514978 | orchestrator | Sunday 04 May 2025 01:10:36 +0000 (0:00:01.414) 0:04:33.035 ************ 2025-05-04 01:14:30.514987 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-04 01:14:30.514996 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-04 01:14:30.515005 | orchestrator | changed: 
[testbed-node-5] => (item=nova-compute)
2025-05-04 01:14:30.515013 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-04 01:14:30.515025 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-04 01:14:30.515034 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-04 01:14:30.515043 | orchestrator |
2025-05-04 01:14:30.515052 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-04 01:14:30.515061 | orchestrator | Sunday 04 May 2025 01:10:41 +0000 (0:00:05.578) 0:04:38.613 ************
2025-05-04 01:14:30.515069 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.515078 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.515087 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.515096 | orchestrator |
2025-05-04 01:14:30.515104 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-04 01:14:30.515113 | orchestrator | Sunday 04 May 2025 01:10:42 +0000 (0:00:00.306) 0:04:38.920 ************
2025-05-04 01:14:30.515122 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.515131 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.515140 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.515148 | orchestrator |
2025-05-04 01:14:30.515157 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-04 01:14:30.515166 | orchestrator | Sunday 04 May 2025 01:10:42 +0000 (0:00:00.488) 0:04:39.409 ************
2025-05-04 01:14:30.515175 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:14:30.515183 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:14:30.515192 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:14:30.515201 | orchestrator |
2025-05-04 01:14:30.515209 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-04 01:14:30.515218 | orchestrator | Sunday 04 May 2025 01:10:44 +0000 (0:00:01.547) 0:04:40.956 ************
2025-05-04 01:14:30.515228 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-04 01:14:30.515241 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-04 01:14:30.515250 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-04 01:14:30.515259 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-04 01:14:30.515268 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-04 01:14:30.515277 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-04 01:14:30.515286 | orchestrator |
2025-05-04 01:14:30.515295 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-04 01:14:30.515307 | orchestrator | Sunday 04 May 2025 01:10:47 +0000 (0:00:03.542) 0:04:44.499 ************
2025-05-04 01:14:30.515316 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-04 01:14:30.515325 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-04 01:14:30.515334 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-04 01:14:30.515343 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-04 01:14:30.515352 | orchestrator | changed: [testbed-node-3]
2025-05-04 01:14:30.515360 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-04 01:14:30.515374 | orchestrator | changed: [testbed-node-4]
2025-05-04 01:14:30.515383 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-04 01:14:30.515392 | orchestrator | changed: [testbed-node-5]
2025-05-04 01:14:30.515401 | orchestrator |
2025-05-04 01:14:30.515410 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-04 01:14:30.515419 | orchestrator | Sunday 04 May 2025 01:10:51 +0000 (0:00:03.487) 0:04:47.987 ************
2025-05-04 01:14:30.515428 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.515443 | orchestrator |
2025-05-04 01:14:30.515452 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-04 01:14:30.515461 | orchestrator | Sunday 04 May 2025 01:10:51 +0000 (0:00:00.132) 0:04:48.120 ************
2025-05-04 01:14:30.515470 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.515480 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.515489 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.515498 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.515507 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.515515 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.515524 | orchestrator |
2025-05-04 01:14:30.515533 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-04 01:14:30.515542 | orchestrator | Sunday 04 May 2025 01:10:52 +0000 (0:00:00.399) 0:04:49.153 ************
2025-05-04 01:14:30.515642 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-04 01:14:30.515651 | orchestrator |
2025-05-04 01:14:30.515660 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-04 01:14:30.515669 | orchestrator | Sunday 04 May 2025 01:10:52 +0000 (0:00:00.399) 0:04:49.553 ************
2025-05-04 01:14:30.515678 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.515686 |
orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.515695 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.515704 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.515713 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.515721 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.515730 | orchestrator | 2025-05-04 01:14:30.515739 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-04 01:14:30.515747 | orchestrator | Sunday 04 May 2025 01:10:53 +0000 (0:00:00.965) 0:04:50.518 ************ 2025-05-04 01:14:30.515757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.515766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.515790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2025-05-04 01:14:30.515816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.515826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.515842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.515852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.515898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.515908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.515949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.515959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.515969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.515978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.515987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.515996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.516010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.516039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.516058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2025-05-04 01:14:30.516067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.516090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.516103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.516112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.516130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.516146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.516223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.516236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516325 | orchestrator |
2025-05-04 01:14:30.516334 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-05-04 01:14:30.516342 | orchestrator | Sunday 04 May 2025 01:10:57 +0000 (0:00:03.903) 0:04:54.422 ************
2025-05-04 01:14:30.516357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-04 01:14:30.516759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-04 01:14:30.516785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.516957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-04 01:14:30.516977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-04 01:14:30.516991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-04 01:14:30.517111 | orchestrator |
2025-05-04 01:14:30.517119 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-05-04 01:14:30.517128 | orchestrator | Sunday 04 May 2025 01:11:05 +0000 (0:00:07.857) 0:05:02.279 ************
2025-05-04 01:14:30.517138 | orchestrator | skipping: [testbed-node-3]
2025-05-04 01:14:30.517147 | orchestrator | skipping: [testbed-node-4]
2025-05-04 01:14:30.517156 | orchestrator | skipping: [testbed-node-5]
2025-05-04 01:14:30.517164 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.517173 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.517182 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.517191 | orchestrator |
2025-05-04 01:14:30.517200 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-05-04 01:14:30.517209 | orchestrator | Sunday 04 May 2025 01:11:07 +0000 (0:00:01.846) 0:05:04.126 ************
2025-05-04 01:14:30.517218 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517247 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517258 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517268 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517277 | orchestrator | skipping: [testbed-node-0]
2025-05-04 01:14:30.517305 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517322 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517331 | orchestrator | skipping: [testbed-node-1]
2025-05-04 01:14:30.517341 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517351 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517361 | orchestrator | skipping: [testbed-node-2]
2025-05-04 01:14:30.517371 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-04 01:14:30.517380 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517390 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517399 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-04 01:14:30.517409 | orchestrator |
2025-05-04 01:14:30.517418 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-05-04 01:14:30.517428 | orchestrator |
Sunday 04 May 2025 01:11:12 +0000 (0:00:05.327) 0:05:09.453 ************ 2025-05-04 01:14:30.517437 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.517447 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.517456 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.517466 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.517475 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.517485 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.517494 | orchestrator | 2025-05-04 01:14:30.517504 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-04 01:14:30.517514 | orchestrator | Sunday 04 May 2025 01:11:13 +0000 (0:00:01.044) 0:05:10.499 ************ 2025-05-04 01:14:30.517523 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-04 01:14:30.517536 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-04 01:14:30.517618 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-04 01:14:30.517634 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-04 01:14:30.517644 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-04 01:14:30.517653 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-04 01:14:30.517661 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-04 01:14:30.517669 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'})  2025-05-04 01:14:30.517677 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-04 01:14:30.517685 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-04 01:14:30.517693 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.517701 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-04 01:14:30.517709 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.517718 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-04 01:14:30.517726 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.517734 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517742 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517767 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517775 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517783 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-04 01:14:30.517791 | orchestrator | 2025-05-04 01:14:30.517799 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-04 01:14:30.517807 | orchestrator | Sunday 04 May 2025 01:11:21 
+0000 (0:00:07.762) 0:05:18.261 ************ 2025-05-04 01:14:30.517816 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-04 01:14:30.517824 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-04 01:14:30.517855 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-04 01:14:30.517865 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-04 01:14:30.517873 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-04 01:14:30.517881 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-04 01:14:30.517890 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-04 01:14:30.517898 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 01:14:30.517906 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 01:14:30.517914 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-04 01:14:30.517922 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-04 01:14:30.517934 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-04 01:14:30.517943 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.517951 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-04 01:14:30.517959 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-04 01:14:30.517967 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.517975 | orchestrator 
| skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-04 01:14:30.517984 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.517992 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 01:14:30.518000 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 01:14:30.518008 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-04 01:14:30.518039 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 01:14:30.518048 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 01:14:30.518057 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-04 01:14:30.518065 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 01:14:30.518073 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 01:14:30.518081 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-04 01:14:30.518089 | orchestrator | 2025-05-04 01:14:30.518102 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-04 01:14:30.518110 | orchestrator | Sunday 04 May 2025 01:11:31 +0000 (0:00:10.223) 0:05:28.484 ************ 2025-05-04 01:14:30.518118 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.518125 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.518132 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.518142 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.518153 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.518164 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.518175 | 
orchestrator | 2025-05-04 01:14:30.518186 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-04 01:14:30.518196 | orchestrator | Sunday 04 May 2025 01:11:32 +0000 (0:00:00.664) 0:05:29.148 ************ 2025-05-04 01:14:30.518208 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.518220 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.518233 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.518241 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.518248 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.518255 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.518262 | orchestrator | 2025-05-04 01:14:30.518270 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-04 01:14:30.518280 | orchestrator | Sunday 04 May 2025 01:11:33 +0000 (0:00:01.073) 0:05:30.222 ************ 2025-05-04 01:14:30.518288 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.518295 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.518302 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.518309 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.518334 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.518342 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.518348 | orchestrator | 2025-05-04 01:14:30.518355 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-04 01:14:30.518363 | orchestrator | Sunday 04 May 2025 01:11:36 +0000 (0:00:02.876) 0:05:33.098 ************ 2025-05-04 01:14:30.518393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518489 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.518504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518631 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.518639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518708 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.518715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518734 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518791 | orchestrator | skipping: [testbed-node-1] 
2025-05-04 01:14:30.518803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 
01:14:30.518870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518889 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.518912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.518921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.518928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.518946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.518957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.518987 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.518994 | orchestrator | 2025-05-04 01:14:30.519001 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-04 01:14:30.519009 | orchestrator | Sunday 04 May 2025 01:11:38 +0000 (0:00:02.209) 0:05:35.308 ************ 2025-05-04 01:14:30.519016 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-04 01:14:30.519024 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519031 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.519038 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-04 01:14:30.519045 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519052 | 
orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.519060 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-04 01:14:30.519067 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519074 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.519081 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-04 01:14:30.519088 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519095 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.519102 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-04 01:14:30.519109 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519116 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.519123 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-04 01:14:30.519135 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-04 01:14:30.519142 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.519149 | orchestrator | 2025-05-04 01:14:30.519156 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-04 01:14:30.519163 | orchestrator | Sunday 04 May 2025 01:11:39 +0000 (0:00:00.899) 0:05:36.207 ************ 2025-05-04 01:14:30.519174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.519204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.519212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.519227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.519234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-04 01:14:30.519242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-04 01:14:30.519255 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519438 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-04 01:14:30.519471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-04 01:14:30.519479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519575 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-04 01:14:30.519659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': 
False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-04 01:14:30.519671 | orchestrator | 2025-05-04 01:14:30.519679 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-04 01:14:30.519686 | orchestrator | Sunday 04 May 2025 01:11:42 +0000 (0:00:03.557) 0:05:39.764 ************ 2025-05-04 01:14:30.519693 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.519700 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.519707 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.519714 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.519722 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.519729 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.519736 | orchestrator | 2025-05-04 01:14:30.519743 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519750 | orchestrator | Sunday 04 May 2025 01:11:43 +0000 (0:00:00.764) 0:05:40.528 ************ 2025-05-04 01:14:30.519757 | orchestrator | 2025-05-04 01:14:30.519764 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519771 | orchestrator | Sunday 04 May 2025 01:11:43 +0000 (0:00:00.332) 0:05:40.860 ************ 2025-05-04 01:14:30.519778 | orchestrator | 2025-05-04 01:14:30.519785 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519793 | orchestrator | Sunday 04 May 2025 01:11:44 +0000 (0:00:00.111) 0:05:40.972 ************ 2025-05-04 01:14:30.519800 | orchestrator | 2025-05-04 
01:14:30.519807 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519814 | orchestrator | Sunday 04 May 2025 01:11:44 +0000 (0:00:00.352) 0:05:41.324 ************ 2025-05-04 01:14:30.519821 | orchestrator | 2025-05-04 01:14:30.519828 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519835 | orchestrator | Sunday 04 May 2025 01:11:44 +0000 (0:00:00.116) 0:05:41.440 ************ 2025-05-04 01:14:30.519842 | orchestrator | 2025-05-04 01:14:30.519849 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-04 01:14:30.519856 | orchestrator | Sunday 04 May 2025 01:11:44 +0000 (0:00:00.333) 0:05:41.774 ************ 2025-05-04 01:14:30.519863 | orchestrator | 2025-05-04 01:14:30.519870 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-04 01:14:30.519878 | orchestrator | Sunday 04 May 2025 01:11:45 +0000 (0:00:00.115) 0:05:41.889 ************ 2025-05-04 01:14:30.519885 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.519892 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.519899 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.519906 | orchestrator | 2025-05-04 01:14:30.519913 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-04 01:14:30.519920 | orchestrator | Sunday 04 May 2025 01:11:52 +0000 (0:00:07.706) 0:05:49.596 ************ 2025-05-04 01:14:30.519927 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.519935 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.519942 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.519949 | orchestrator | 2025-05-04 01:14:30.519956 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-04 01:14:30.519963 | 
orchestrator | Sunday 04 May 2025 01:12:08 +0000 (0:00:16.191) 0:06:05.787 ************ 2025-05-04 01:14:30.519972 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.519980 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.519987 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.519994 | orchestrator | 2025-05-04 01:14:30.520001 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-04 01:14:30.520009 | orchestrator | Sunday 04 May 2025 01:12:30 +0000 (0:00:21.361) 0:06:27.148 ************ 2025-05-04 01:14:30.520016 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.520026 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.520034 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.520041 | orchestrator | 2025-05-04 01:14:30.520048 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-04 01:14:30.520055 | orchestrator | Sunday 04 May 2025 01:12:59 +0000 (0:00:28.992) 0:06:56.141 ************ 2025-05-04 01:14:30.520062 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.520069 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.520076 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.520084 | orchestrator | 2025-05-04 01:14:30.520091 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-04 01:14:30.520101 | orchestrator | Sunday 04 May 2025 01:13:00 +0000 (0:00:01.233) 0:06:57.375 ************ 2025-05-04 01:14:30.520108 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.520115 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.520122 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.520129 | orchestrator | 2025-05-04 01:14:30.520136 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-04 01:14:30.520143 | orchestrator | 
Sunday 04 May 2025 01:13:01 +0000 (0:00:00.795) 0:06:58.170 ************ 2025-05-04 01:14:30.520151 | orchestrator | changed: [testbed-node-5] 2025-05-04 01:14:30.520158 | orchestrator | changed: [testbed-node-4] 2025-05-04 01:14:30.520165 | orchestrator | changed: [testbed-node-3] 2025-05-04 01:14:30.520172 | orchestrator | 2025-05-04 01:14:30.520179 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-04 01:14:30.520186 | orchestrator | Sunday 04 May 2025 01:13:23 +0000 (0:00:22.481) 0:07:20.652 ************ 2025-05-04 01:14:30.520193 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.520200 | orchestrator | 2025-05-04 01:14:30.520207 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-04 01:14:30.520214 | orchestrator | Sunday 04 May 2025 01:13:23 +0000 (0:00:00.132) 0:07:20.784 ************ 2025-05-04 01:14:30.520222 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.520229 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.520236 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.520243 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.520250 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.520257 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
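The `FAILED - RETRYING: ... (20 retries left).` entry above is Ansible's `until`/`retries` loop polling until the nova-compute services show up. A minimal sketch (the function name is illustrative, not part of any tool here) for pulling the host, task, and remaining retries out of such a line:

```python
import re

# Matches Ansible retry lines of the form seen in this log, e.g.
# "FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for ... (20 retries left)."
RETRY_RE = re.compile(
    r"FAILED - RETRYING: \[(?P<host>[^\]]+)\]: (?P<task>.+) \((?P<left>\d+) retries left\)\."
)

def parse_retry(line: str):
    """Return (host, task, retries_left) for an Ansible retry line, or None."""
    m = RETRY_RE.search(line)
    if m is None:
        return None
    return m.group("host"), m.group("task"), int(m.group("left"))

line = ("FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: "
        "Waiting for nova-compute services to register themselves (20 retries left).")
host, task, left = parse_retry(line)
```

Counting occurrences of these lines per task is a quick way to spot which waits in a run came close to exhausting their retry budget.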
2025-05-04 01:14:30.520264 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 01:14:30.520272 | orchestrator | 2025-05-04 01:14:30.520279 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-04 01:14:30.520286 | orchestrator | Sunday 04 May 2025 01:13:45 +0000 (0:00:21.811) 0:07:42.595 ************ 2025-05-04 01:14:30.520293 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.520300 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.520307 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.520314 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.520325 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.520332 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.520339 | orchestrator | 2025-05-04 01:14:30.520346 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-04 01:14:30.520353 | orchestrator | Sunday 04 May 2025 01:13:55 +0000 (0:00:09.492) 0:07:52.088 ************ 2025-05-04 01:14:30.520360 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.520367 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.520374 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.520381 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.520388 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.520395 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-05-04 01:14:30.520402 | orchestrator | 2025-05-04 01:14:30.520415 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-04 01:14:30.520422 | orchestrator | Sunday 04 May 2025 01:13:58 +0000 (0:00:03.482) 0:07:55.571 ************ 2025-05-04 01:14:30.520429 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 01:14:30.520436 | 
orchestrator | 2025-05-04 01:14:30.520443 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-04 01:14:30.520450 | orchestrator | Sunday 04 May 2025 01:14:08 +0000 (0:00:10.278) 0:08:05.849 ************ 2025-05-04 01:14:30.520458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 01:14:30.520465 | orchestrator | 2025-05-04 01:14:30.520472 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-04 01:14:30.520479 | orchestrator | Sunday 04 May 2025 01:14:10 +0000 (0:00:01.192) 0:08:07.041 ************ 2025-05-04 01:14:30.520486 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.520493 | orchestrator | 2025-05-04 01:14:30.520500 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-04 01:14:30.520507 | orchestrator | Sunday 04 May 2025 01:14:11 +0000 (0:00:01.561) 0:08:08.603 ************ 2025-05-04 01:14:30.520514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-04 01:14:30.520521 | orchestrator | 2025-05-04 01:14:30.520528 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-04 01:14:30.520535 | orchestrator | Sunday 04 May 2025 01:14:20 +0000 (0:00:09.197) 0:08:17.800 ************ 2025-05-04 01:14:30.520556 | orchestrator | ok: [testbed-node-3] 2025-05-04 01:14:30.520565 | orchestrator | ok: [testbed-node-4] 2025-05-04 01:14:30.520572 | orchestrator | ok: [testbed-node-5] 2025-05-04 01:14:30.520580 | orchestrator | ok: [testbed-node-0] 2025-05-04 01:14:30.520587 | orchestrator | ok: [testbed-node-1] 2025-05-04 01:14:30.520594 | orchestrator | ok: [testbed-node-2] 2025-05-04 01:14:30.520601 | orchestrator | 2025-05-04 01:14:30.520610 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-04 01:14:30.520618 | orchestrator | 2025-05-04 
01:14:30.520625 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-04 01:14:30.520632 | orchestrator | Sunday 04 May 2025 01:14:23 +0000 (0:00:02.258) 0:08:20.059 ************ 2025-05-04 01:14:30.520639 | orchestrator | changed: [testbed-node-0] 2025-05-04 01:14:30.520646 | orchestrator | changed: [testbed-node-1] 2025-05-04 01:14:30.520653 | orchestrator | changed: [testbed-node-2] 2025-05-04 01:14:30.520660 | orchestrator | 2025-05-04 01:14:30.520667 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-04 01:14:30.520674 | orchestrator | 2025-05-04 01:14:30.520681 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-04 01:14:30.520688 | orchestrator | Sunday 04 May 2025 01:14:24 +0000 (0:00:01.051) 0:08:21.110 ************ 2025-05-04 01:14:30.520695 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.520702 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.520709 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.520716 | orchestrator | 2025-05-04 01:14:30.520723 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-04 01:14:30.520730 | orchestrator | 2025-05-04 01:14:30.520737 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-04 01:14:30.520745 | orchestrator | Sunday 04 May 2025 01:14:25 +0000 (0:00:00.838) 0:08:21.949 ************ 2025-05-04 01:14:30.520752 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-04 01:14:30.520759 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-04 01:14:30.520766 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-04 01:14:30.520773 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-04 01:14:30.520780 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-04 01:14:30.520787 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.520793 | orchestrator | skipping: [testbed-node-3] 2025-05-04 01:14:30.520805 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-04 01:14:30.520812 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-04 01:14:30.520819 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-04 01:14:30.520826 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-04 01:14:30.520833 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-04 01:14:30.520840 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.520847 | orchestrator | skipping: [testbed-node-4] 2025-05-04 01:14:30.520854 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-04 01:14:30.520861 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-04 01:14:30.520868 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-04 01:14:30.520879 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-04 01:14:30.520886 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-04 01:14:30.520893 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.520900 | orchestrator | skipping: [testbed-node-5] 2025-05-04 01:14:30.520907 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-04 01:14:30.520914 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-04 01:14:30.520921 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-04 01:14:30.520928 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-04 01:14:30.520935 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-04 01:14:30.520942 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.520949 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-04 01:14:30.520956 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-04 01:14:30.520963 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-04 01:14:30.520970 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-04 01:14:30.520977 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-04 01:14:30.520984 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.520991 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.520998 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.521006 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-04 01:14:30.521013 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-04 01:14:30.521020 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-04 01:14:30.521027 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-04 01:14:30.521034 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-04 01:14:30.521041 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-04 01:14:30.521048 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:30.521055 | orchestrator | 2025-05-04 01:14:30.521062 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-04 01:14:30.521069 | orchestrator | 2025-05-04 01:14:30.521081 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-04 01:14:30.521088 | orchestrator | Sunday 04 May 2025 01:14:26 +0000 (0:00:01.574) 
0:08:23.523 ************ 2025-05-04 01:14:30.521096 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-04 01:14:30.521103 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-04 01:14:30.521110 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:30.521117 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-04 01:14:30.521124 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-04 01:14:30.521131 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:30.521145 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-04 01:14:33.556112 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-04 01:14:33.556242 | orchestrator | skipping: [testbed-node-2] 2025-05-04 01:14:33.556261 | orchestrator | 2025-05-04 01:14:33.556277 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-04 01:14:33.556292 | orchestrator | 2025-05-04 01:14:33.556306 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-04 01:14:33.556321 | orchestrator | Sunday 04 May 2025 01:14:27 +0000 (0:00:00.653) 0:08:24.176 ************ 2025-05-04 01:14:33.556335 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:33.556349 | orchestrator | 2025-05-04 01:14:33.556363 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-04 01:14:33.556377 | orchestrator | 2025-05-04 01:14:33.556391 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-04 01:14:33.556405 | orchestrator | Sunday 04 May 2025 01:14:28 +0000 (0:00:01.060) 0:08:25.237 ************ 2025-05-04 01:14:33.556420 | orchestrator | skipping: [testbed-node-0] 2025-05-04 01:14:33.556434 | orchestrator | skipping: [testbed-node-1] 2025-05-04 01:14:33.556448 | orchestrator | skipping: [testbed-node-2] 
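Most entries in this console stream follow the shape `<timestamp> | <node> | <message>`. A small sketch, assuming that three-field layout (header lines without a node field will simply not match), for splitting entries back out of the wrapped text:

```python
import re

# A Zuul console entry: timestamp, "| node |", then the message.
# Lines without a node column (e.g. the job header) intentionally return None.
ENTRY_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \| (?P<node>\S+) \| (?P<msg>.*)"
)

def parse_entry(line: str):
    """Return (timestamp, node, message) for a console line, or None."""
    m = ENTRY_RE.match(line)
    if m is None:
        return None
    return m.group("ts"), m.group("node"), m.group("msg")

ts, node, msg = parse_entry(
    "2025-05-04 01:14:33.556112 | orchestrator | skipping: [testbed-node-2]"
)
```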
2025-05-04 01:14:33.556462 | orchestrator | 2025-05-04 01:14:33.556476 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-04 01:14:33.556490 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-04 01:14:33.556507 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-04 01:14:33.556522 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-04 01:14:33.556536 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-04 01:14:33.556583 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-04 01:14:33.556598 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-04 01:14:33.556613 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-04 01:14:33.556627 | orchestrator | 2025-05-04 01:14:33.556644 | orchestrator | 2025-05-04 01:14:33.556661 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-04 01:14:33.556678 | orchestrator | Sunday 04 May 2025 01:14:28 +0000 (0:00:00.604) 0:08:25.841 ************ 2025-05-04 01:14:33.556696 | orchestrator | =============================================================================== 2025-05-04 01:14:33.556712 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.39s 2025-05-04 01:14:33.556728 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.99s 2025-05-04 01:14:33.556744 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.48s 2025-05-04 01:14:33.556760 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 21.81s 2025-05-04 01:14:33.556776 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.36s 2025-05-04 01:14:33.556792 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.64s 2025-05-04 01:14:33.556807 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.95s 2025-05-04 01:14:33.556823 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.19s 2025-05-04 01:14:33.556839 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.14s 2025-05-04 01:14:33.556886 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.75s 2025-05-04 01:14:33.556903 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.20s 2025-05-04 01:14:33.556918 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.04s 2025-05-04 01:14:33.556934 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.75s 2025-05-04 01:14:33.556950 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.38s 2025-05-04 01:14:33.556966 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.28s 2025-05-04 01:14:33.556982 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.22s 2025-05-04 01:14:33.556998 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.49s 2025-05-04 01:14:33.557012 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.20s 2025-05-04 01:14:33.557026 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.00s 2025-05-04 01:14:33.557041 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 8.00s
2025-05-04 01:14:33.557056 | orchestrator | 2025-05-04 01:14:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
2025-05-04 01:14:33.557071 | orchestrator | 2025-05-04 01:14:30 | INFO  | Wait 1 second(s) until the next check
[... repeated "Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED" / "Wait 1 second(s) until the next check" polling entries, one pair roughly every 3 seconds from 01:14:33 to 01:16:05, elided ...]
2025-05-04 01:16:08.098181 | orchestrator | 2025-05-04 01:16:05 | INFO  | Wait 1 second(s) until the next check
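The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines is a plain status-polling loop: fetch the task's state, log it, sleep, repeat until the state becomes terminal. A minimal sketch of that pattern, with a deadline added so the loop cannot spin forever; `get_task_state` is a hypothetical stand-in for whatever API reports the task status, not the actual OSISM client call:

```python
import time


def wait_for_task(task_id, get_task_state, interval=1.0, timeout=3600.0):
    """Poll get_task_state(task_id) until it leaves a running state.

    Returns the terminal state, or raises TimeoutError if the task is
    still running when the deadline expires.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Task {task_id} still {state} after {timeout}s")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Using `time.monotonic()` for the deadline keeps the timeout immune to wall-clock adjustments, which matters for loops that can run for many minutes as this one does.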
2025-05-04 01:16:08.098338 | orchestrator | 2025-05-04 01:16:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
[... repeated "Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED" / "Wait 1 second(s) until the next check" polling entries, one pair roughly every 3 seconds from 01:16:08 to 01:21:19, elided ...]
2025-05-04 01:21:25.381969 | orchestrator | 2025-05-04 01:21:22 | INFO  | Wait 1 second(s) until the next check
2025-05-04 01:21:25.382209 | orchestrator | 2025-05-04 01:21:25 | INFO  | Task
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:28.443778 | orchestrator | 2025-05-04 01:21:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:28.443920 | orchestrator | 2025-05-04 01:21:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:31.503480 | orchestrator | 2025-05-04 01:21:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:31.503643 | orchestrator | 2025-05-04 01:21:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:34.551760 | orchestrator | 2025-05-04 01:21:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:34.551905 | orchestrator | 2025-05-04 01:21:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:37.605334 | orchestrator | 2025-05-04 01:21:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:37.605484 | orchestrator | 2025-05-04 01:21:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:40.653645 | orchestrator | 2025-05-04 01:21:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:40.653861 | orchestrator | 2025-05-04 01:21:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:43.731746 | orchestrator | 2025-05-04 01:21:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:43.731902 | orchestrator | 2025-05-04 01:21:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:46.788648 | orchestrator | 2025-05-04 01:21:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:46.788874 | orchestrator | 2025-05-04 01:21:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:49.836853 | orchestrator | 2025-05-04 01:21:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:49.836999 | orchestrator | 2025-05-04 01:21:49 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:52.883303 | orchestrator | 2025-05-04 01:21:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:52.883497 | orchestrator | 2025-05-04 01:21:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:55.934525 | orchestrator | 2025-05-04 01:21:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:55.934740 | orchestrator | 2025-05-04 01:21:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:21:58.990244 | orchestrator | 2025-05-04 01:21:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:21:58.990394 | orchestrator | 2025-05-04 01:21:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:02.046437 | orchestrator | 2025-05-04 01:21:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:02.046586 | orchestrator | 2025-05-04 01:22:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:05.101482 | orchestrator | 2025-05-04 01:22:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:05.101636 | orchestrator | 2025-05-04 01:22:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:08.155949 | orchestrator | 2025-05-04 01:22:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:08.156100 | orchestrator | 2025-05-04 01:22:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:11.206397 | orchestrator | 2025-05-04 01:22:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:11.206552 | orchestrator | 2025-05-04 01:22:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:14.255985 | orchestrator | 2025-05-04 01:22:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:14.256138 | orchestrator | 2025-05-04 01:22:14 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:17.312795 | orchestrator | 2025-05-04 01:22:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:17.312950 | orchestrator | 2025-05-04 01:22:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:20.362329 | orchestrator | 2025-05-04 01:22:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:20.362482 | orchestrator | 2025-05-04 01:22:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:20.362823 | orchestrator | 2025-05-04 01:22:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:23.411398 | orchestrator | 2025-05-04 01:22:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:26.456958 | orchestrator | 2025-05-04 01:22:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:26.457107 | orchestrator | 2025-05-04 01:22:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:29.505414 | orchestrator | 2025-05-04 01:22:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:29.505570 | orchestrator | 2025-05-04 01:22:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:32.550874 | orchestrator | 2025-05-04 01:22:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:32.551026 | orchestrator | 2025-05-04 01:22:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:35.600760 | orchestrator | 2025-05-04 01:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:35.600913 | orchestrator | 2025-05-04 01:22:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:38.657894 | orchestrator | 2025-05-04 01:22:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:38.658056 | orchestrator | 2025-05-04 01:22:38 | INFO  | Task 
d664ea18-e6c2-41f5-b979-c1c04c31896a is in state STARTED 2025-05-04 01:22:38.659112 | orchestrator | 2025-05-04 01:22:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:38.659133 | orchestrator | 2025-05-04 01:22:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:41.730594 | orchestrator | 2025-05-04 01:22:41 | INFO  | Task d664ea18-e6c2-41f5-b979-c1c04c31896a is in state STARTED 2025-05-04 01:22:41.732048 | orchestrator | 2025-05-04 01:22:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:44.795575 | orchestrator | 2025-05-04 01:22:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:44.795771 | orchestrator | 2025-05-04 01:22:44 | INFO  | Task d664ea18-e6c2-41f5-b979-c1c04c31896a is in state STARTED 2025-05-04 01:22:44.797066 | orchestrator | 2025-05-04 01:22:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:47.855450 | orchestrator | 2025-05-04 01:22:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:47.855607 | orchestrator | 2025-05-04 01:22:47 | INFO  | Task d664ea18-e6c2-41f5-b979-c1c04c31896a is in state SUCCESS 2025-05-04 01:22:47.856505 | orchestrator | 2025-05-04 01:22:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:50.908934 | orchestrator | 2025-05-04 01:22:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:50.909081 | orchestrator | 2025-05-04 01:22:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:53.954993 | orchestrator | 2025-05-04 01:22:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:53.955199 | orchestrator | 2025-05-04 01:22:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:22:57.008058 | orchestrator | 2025-05-04 01:22:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:22:57.008210 | orchestrator | 
2025-05-04 01:22:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:00.065347 | orchestrator | 2025-05-04 01:22:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:00.065497 | orchestrator | 2025-05-04 01:23:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:03.118637 | orchestrator | 2025-05-04 01:23:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:03.118816 | orchestrator | 2025-05-04 01:23:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:06.171243 | orchestrator | 2025-05-04 01:23:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:06.171397 | orchestrator | 2025-05-04 01:23:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:09.224250 | orchestrator | 2025-05-04 01:23:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:09.224397 | orchestrator | 2025-05-04 01:23:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:12.280566 | orchestrator | 2025-05-04 01:23:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:12.280794 | orchestrator | 2025-05-04 01:23:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:15.331487 | orchestrator | 2025-05-04 01:23:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:15.331631 | orchestrator | 2025-05-04 01:23:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:18.380067 | orchestrator | 2025-05-04 01:23:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:18.380243 | orchestrator | 2025-05-04 01:23:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:21.430355 | orchestrator | 2025-05-04 01:23:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:21.430534 | orchestrator | 2025-05-04 
01:23:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:24.484627 | orchestrator | 2025-05-04 01:23:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:24.484869 | orchestrator | 2025-05-04 01:23:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:27.528431 | orchestrator | 2025-05-04 01:23:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:27.528591 | orchestrator | 2025-05-04 01:23:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:30.577005 | orchestrator | 2025-05-04 01:23:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:30.577159 | orchestrator | 2025-05-04 01:23:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:33.628413 | orchestrator | 2025-05-04 01:23:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:33.628568 | orchestrator | 2025-05-04 01:23:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:36.671463 | orchestrator | 2025-05-04 01:23:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:36.671644 | orchestrator | 2025-05-04 01:23:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:39.727941 | orchestrator | 2025-05-04 01:23:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:39.728089 | orchestrator | 2025-05-04 01:23:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:42.779406 | orchestrator | 2025-05-04 01:23:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:42.779560 | orchestrator | 2025-05-04 01:23:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:45.830750 | orchestrator | 2025-05-04 01:23:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:45.830907 | orchestrator | 2025-05-04 01:23:45 | INFO 
 | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:48.874943 | orchestrator | 2025-05-04 01:23:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:48.875123 | orchestrator | 2025-05-04 01:23:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:51.926931 | orchestrator | 2025-05-04 01:23:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:51.927076 | orchestrator | 2025-05-04 01:23:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:54.980754 | orchestrator | 2025-05-04 01:23:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:54.980911 | orchestrator | 2025-05-04 01:23:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:23:58.033764 | orchestrator | 2025-05-04 01:23:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:23:58.033919 | orchestrator | 2025-05-04 01:23:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:01.081547 | orchestrator | 2025-05-04 01:23:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:01.081706 | orchestrator | 2025-05-04 01:24:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:04.135568 | orchestrator | 2025-05-04 01:24:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:04.135817 | orchestrator | 2025-05-04 01:24:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:07.187050 | orchestrator | 2025-05-04 01:24:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:07.187212 | orchestrator | 2025-05-04 01:24:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:10.241325 | orchestrator | 2025-05-04 01:24:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:10.241480 | orchestrator | 2025-05-04 01:24:10 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:13.293180 | orchestrator | 2025-05-04 01:24:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:13.293331 | orchestrator | 2025-05-04 01:24:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:16.341373 | orchestrator | 2025-05-04 01:24:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:16.341520 | orchestrator | 2025-05-04 01:24:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:19.391144 | orchestrator | 2025-05-04 01:24:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:19.391299 | orchestrator | 2025-05-04 01:24:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:22.444279 | orchestrator | 2025-05-04 01:24:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:22.444438 | orchestrator | 2025-05-04 01:24:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:25.489467 | orchestrator | 2025-05-04 01:24:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:25.489577 | orchestrator | 2025-05-04 01:24:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:28.543267 | orchestrator | 2025-05-04 01:24:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:28.543454 | orchestrator | 2025-05-04 01:24:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:31.605836 | orchestrator | 2025-05-04 01:24:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:31.605997 | orchestrator | 2025-05-04 01:24:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:34.658777 | orchestrator | 2025-05-04 01:24:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:34.658924 | orchestrator | 2025-05-04 01:24:34 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:37.714518 | orchestrator | 2025-05-04 01:24:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:37.714682 | orchestrator | 2025-05-04 01:24:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:40.767276 | orchestrator | 2025-05-04 01:24:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:40.767438 | orchestrator | 2025-05-04 01:24:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:43.812188 | orchestrator | 2025-05-04 01:24:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:43.812342 | orchestrator | 2025-05-04 01:24:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:46.863869 | orchestrator | 2025-05-04 01:24:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:46.864028 | orchestrator | 2025-05-04 01:24:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:49.907323 | orchestrator | 2025-05-04 01:24:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:49.907514 | orchestrator | 2025-05-04 01:24:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:52.963536 | orchestrator | 2025-05-04 01:24:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:52.963686 | orchestrator | 2025-05-04 01:24:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:56.013029 | orchestrator | 2025-05-04 01:24:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:56.013183 | orchestrator | 2025-05-04 01:24:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:24:59.061547 | orchestrator | 2025-05-04 01:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:24:59.061698 | orchestrator | 2025-05-04 01:24:59 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:02.111012 | orchestrator | 2025-05-04 01:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:02.111188 | orchestrator | 2025-05-04 01:25:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:05.155034 | orchestrator | 2025-05-04 01:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:05.155196 | orchestrator | 2025-05-04 01:25:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:08.215807 | orchestrator | 2025-05-04 01:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:08.215983 | orchestrator | 2025-05-04 01:25:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:11.267749 | orchestrator | 2025-05-04 01:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:11.267937 | orchestrator | 2025-05-04 01:25:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:14.320870 | orchestrator | 2025-05-04 01:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:14.321036 | orchestrator | 2025-05-04 01:25:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:17.376043 | orchestrator | 2025-05-04 01:25:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:17.376184 | orchestrator | 2025-05-04 01:25:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:20.427564 | orchestrator | 2025-05-04 01:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:20.427785 | orchestrator | 2025-05-04 01:25:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:23.480616 | orchestrator | 2025-05-04 01:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:23.480827 | orchestrator | 2025-05-04 01:25:23 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:26.537020 | orchestrator | 2025-05-04 01:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:26.537189 | orchestrator | 2025-05-04 01:25:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:29.583806 | orchestrator | 2025-05-04 01:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:29.583972 | orchestrator | 2025-05-04 01:25:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:32.631478 | orchestrator | 2025-05-04 01:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:32.631705 | orchestrator | 2025-05-04 01:25:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:35.681506 | orchestrator | 2025-05-04 01:25:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:35.681699 | orchestrator | 2025-05-04 01:25:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:38.731281 | orchestrator | 2025-05-04 01:25:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:38.731454 | orchestrator | 2025-05-04 01:25:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:41.786139 | orchestrator | 2025-05-04 01:25:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:41.786307 | orchestrator | 2025-05-04 01:25:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:44.839981 | orchestrator | 2025-05-04 01:25:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:44.840149 | orchestrator | 2025-05-04 01:25:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:47.891544 | orchestrator | 2025-05-04 01:25:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:47.891781 | orchestrator | 2025-05-04 01:25:47 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:50.943318 | orchestrator | 2025-05-04 01:25:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:50.943472 | orchestrator | 2025-05-04 01:25:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:53.996120 | orchestrator | 2025-05-04 01:25:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:53.996293 | orchestrator | 2025-05-04 01:25:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:25:57.048412 | orchestrator | 2025-05-04 01:25:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:25:57.048641 | orchestrator | 2025-05-04 01:25:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:00.089250 | orchestrator | 2025-05-04 01:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:00.089392 | orchestrator | 2025-05-04 01:26:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:03.136184 | orchestrator | 2025-05-04 01:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:03.136339 | orchestrator | 2025-05-04 01:26:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:06.184774 | orchestrator | 2025-05-04 01:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:06.184933 | orchestrator | 2025-05-04 01:26:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:09.239661 | orchestrator | 2025-05-04 01:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:09.239843 | orchestrator | 2025-05-04 01:26:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:12.289708 | orchestrator | 2025-05-04 01:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:12.289852 | orchestrator | 2025-05-04 01:26:12 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:15.349996 | orchestrator | 2025-05-04 01:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:15.350208 | orchestrator | 2025-05-04 01:26:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:18.401816 | orchestrator | 2025-05-04 01:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:18.401969 | orchestrator | 2025-05-04 01:26:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:21.451361 | orchestrator | 2025-05-04 01:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:21.451601 | orchestrator | 2025-05-04 01:26:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:24.508512 | orchestrator | 2025-05-04 01:26:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:24.508704 | orchestrator | 2025-05-04 01:26:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:27.562720 | orchestrator | 2025-05-04 01:26:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:27.562887 | orchestrator | 2025-05-04 01:26:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:30.610246 | orchestrator | 2025-05-04 01:26:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:30.610381 | orchestrator | 2025-05-04 01:26:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:33.659902 | orchestrator | 2025-05-04 01:26:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:33.660080 | orchestrator | 2025-05-04 01:26:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:26:36.713153 | orchestrator | 2025-05-04 01:26:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:26:36.713304 | orchestrator | 2025-05-04 01:26:36 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
[... repeated polling entries elided: "Wait 1 second(s) until the next check" / "Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED", logged every ~3 seconds from 2025-05-04 01:26:39 through 2025-05-04 01:35:00 ...]
2025-05-04 01:32:36.835852 | orchestrator | 2025-05-04 01:32:36 | INFO  | Task 0ed32088-d305-47f9-8e76-2522c245013e is in state STARTED
2025-05-04 01:32:49.073403 | orchestrator | 2025-05-04 01:32:49 | INFO  | Task 0ed32088-d305-47f9-8e76-2522c245013e is in state SUCCESS
2025-05-04 01:35:00.300139 | orchestrator | 2025-05-04 01:35:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:00.303715 | orchestrator | 2025-05-04 01:35:00 | INFO  | Wait 1 second(s) until
the next check 2025-05-04 01:35:03.361033 | orchestrator | 2025-05-04 01:35:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:06.410540 | orchestrator | 2025-05-04 01:35:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:06.410689 | orchestrator | 2025-05-04 01:35:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:09.462738 | orchestrator | 2025-05-04 01:35:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:09.462889 | orchestrator | 2025-05-04 01:35:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:12.520958 | orchestrator | 2025-05-04 01:35:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:12.521119 | orchestrator | 2025-05-04 01:35:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:15.576239 | orchestrator | 2025-05-04 01:35:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:15.576385 | orchestrator | 2025-05-04 01:35:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:18.627370 | orchestrator | 2025-05-04 01:35:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:18.627635 | orchestrator | 2025-05-04 01:35:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:21.680445 | orchestrator | 2025-05-04 01:35:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:21.680673 | orchestrator | 2025-05-04 01:35:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:24.731995 | orchestrator | 2025-05-04 01:35:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:24.732163 | orchestrator | 2025-05-04 01:35:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:27.782942 | orchestrator | 2025-05-04 01:35:24 | INFO  | Wait 1 second(s) until the next check 
2025-05-04 01:35:27.783107 | orchestrator | 2025-05-04 01:35:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:30.841145 | orchestrator | 2025-05-04 01:35:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:30.841317 | orchestrator | 2025-05-04 01:35:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:33.891284 | orchestrator | 2025-05-04 01:35:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:33.891543 | orchestrator | 2025-05-04 01:35:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:36.939206 | orchestrator | 2025-05-04 01:35:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:36.939368 | orchestrator | 2025-05-04 01:35:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:39.989607 | orchestrator | 2025-05-04 01:35:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:39.989753 | orchestrator | 2025-05-04 01:35:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:43.042245 | orchestrator | 2025-05-04 01:35:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:43.042407 | orchestrator | 2025-05-04 01:35:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:46.095164 | orchestrator | 2025-05-04 01:35:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:46.095324 | orchestrator | 2025-05-04 01:35:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:49.147590 | orchestrator | 2025-05-04 01:35:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:49.147763 | orchestrator | 2025-05-04 01:35:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:52.195063 | orchestrator | 2025-05-04 01:35:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 
01:35:52.195233 | orchestrator | 2025-05-04 01:35:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:55.242690 | orchestrator | 2025-05-04 01:35:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:55.242862 | orchestrator | 2025-05-04 01:35:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:35:58.293258 | orchestrator | 2025-05-04 01:35:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:35:58.293420 | orchestrator | 2025-05-04 01:35:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:01.343094 | orchestrator | 2025-05-04 01:35:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:01.343262 | orchestrator | 2025-05-04 01:36:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:04.393252 | orchestrator | 2025-05-04 01:36:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:04.393460 | orchestrator | 2025-05-04 01:36:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:07.446381 | orchestrator | 2025-05-04 01:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:07.446598 | orchestrator | 2025-05-04 01:36:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:10.498835 | orchestrator | 2025-05-04 01:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:10.498999 | orchestrator | 2025-05-04 01:36:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:13.548186 | orchestrator | 2025-05-04 01:36:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:13.548390 | orchestrator | 2025-05-04 01:36:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:13.549473 | orchestrator | 2025-05-04 01:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:16.594398 
| orchestrator | 2025-05-04 01:36:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:19.649152 | orchestrator | 2025-05-04 01:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:19.649347 | orchestrator | 2025-05-04 01:36:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:22.699840 | orchestrator | 2025-05-04 01:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:22.700001 | orchestrator | 2025-05-04 01:36:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:25.752608 | orchestrator | 2025-05-04 01:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:25.752833 | orchestrator | 2025-05-04 01:36:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:28.805749 | orchestrator | 2025-05-04 01:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:28.805899 | orchestrator | 2025-05-04 01:36:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:31.860703 | orchestrator | 2025-05-04 01:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:31.860890 | orchestrator | 2025-05-04 01:36:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:34.911862 | orchestrator | 2025-05-04 01:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:34.911979 | orchestrator | 2025-05-04 01:36:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:37.971155 | orchestrator | 2025-05-04 01:36:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:37.971326 | orchestrator | 2025-05-04 01:36:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:41.022499 | orchestrator | 2025-05-04 01:36:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:41.022634 | orchestrator 
| 2025-05-04 01:36:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:44.066875 | orchestrator | 2025-05-04 01:36:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:44.067038 | orchestrator | 2025-05-04 01:36:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:47.115549 | orchestrator | 2025-05-04 01:36:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:47.115708 | orchestrator | 2025-05-04 01:36:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:50.165016 | orchestrator | 2025-05-04 01:36:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:50.165222 | orchestrator | 2025-05-04 01:36:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:53.218989 | orchestrator | 2025-05-04 01:36:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:53.219141 | orchestrator | 2025-05-04 01:36:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:56.263566 | orchestrator | 2025-05-04 01:36:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:56.263725 | orchestrator | 2025-05-04 01:36:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:36:59.314221 | orchestrator | 2025-05-04 01:36:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:36:59.314374 | orchestrator | 2025-05-04 01:36:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:02.361102 | orchestrator | 2025-05-04 01:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:02.361256 | orchestrator | 2025-05-04 01:37:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:05.409103 | orchestrator | 2025-05-04 01:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:05.409253 | orchestrator | 2025-05-04 
01:37:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:08.462686 | orchestrator | 2025-05-04 01:37:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:08.462842 | orchestrator | 2025-05-04 01:37:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:11.510244 | orchestrator | 2025-05-04 01:37:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:11.510393 | orchestrator | 2025-05-04 01:37:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:14.579863 | orchestrator | 2025-05-04 01:37:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:14.580042 | orchestrator | 2025-05-04 01:37:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:17.629065 | orchestrator | 2025-05-04 01:37:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:17.629223 | orchestrator | 2025-05-04 01:37:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:20.682466 | orchestrator | 2025-05-04 01:37:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:20.682610 | orchestrator | 2025-05-04 01:37:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:23.739837 | orchestrator | 2025-05-04 01:37:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:23.740014 | orchestrator | 2025-05-04 01:37:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:26.786840 | orchestrator | 2025-05-04 01:37:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:26.786979 | orchestrator | 2025-05-04 01:37:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:29.836366 | orchestrator | 2025-05-04 01:37:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:29.836602 | orchestrator | 2025-05-04 01:37:29 | INFO 
 | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:32.878492 | orchestrator | 2025-05-04 01:37:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:32.878670 | orchestrator | 2025-05-04 01:37:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:32.878916 | orchestrator | 2025-05-04 01:37:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:35.929624 | orchestrator | 2025-05-04 01:37:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:38.986828 | orchestrator | 2025-05-04 01:37:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:38.986980 | orchestrator | 2025-05-04 01:37:38 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:42.033832 | orchestrator | 2025-05-04 01:37:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:42.033962 | orchestrator | 2025-05-04 01:37:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:45.086916 | orchestrator | 2025-05-04 01:37:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:45.087070 | orchestrator | 2025-05-04 01:37:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:48.133159 | orchestrator | 2025-05-04 01:37:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:48.133332 | orchestrator | 2025-05-04 01:37:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:51.181517 | orchestrator | 2025-05-04 01:37:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:51.181667 | orchestrator | 2025-05-04 01:37:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:54.227801 | orchestrator | 2025-05-04 01:37:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:54.227961 | orchestrator | 2025-05-04 01:37:54 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:37:57.280806 | orchestrator | 2025-05-04 01:37:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:37:57.280954 | orchestrator | 2025-05-04 01:37:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:00.337949 | orchestrator | 2025-05-04 01:37:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:00.338098 | orchestrator | 2025-05-04 01:38:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:03.384672 | orchestrator | 2025-05-04 01:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:03.384828 | orchestrator | 2025-05-04 01:38:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:06.432833 | orchestrator | 2025-05-04 01:38:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:06.432950 | orchestrator | 2025-05-04 01:38:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:09.488571 | orchestrator | 2025-05-04 01:38:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:09.488724 | orchestrator | 2025-05-04 01:38:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:12.537166 | orchestrator | 2025-05-04 01:38:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:12.537316 | orchestrator | 2025-05-04 01:38:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:15.591737 | orchestrator | 2025-05-04 01:38:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:15.591891 | orchestrator | 2025-05-04 01:38:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:18.639086 | orchestrator | 2025-05-04 01:38:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:18.639297 | orchestrator | 2025-05-04 01:38:18 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:21.690233 | orchestrator | 2025-05-04 01:38:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:21.690505 | orchestrator | 2025-05-04 01:38:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:24.737957 | orchestrator | 2025-05-04 01:38:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:24.738175 | orchestrator | 2025-05-04 01:38:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:27.789040 | orchestrator | 2025-05-04 01:38:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:27.789187 | orchestrator | 2025-05-04 01:38:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:30.840802 | orchestrator | 2025-05-04 01:38:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:30.840959 | orchestrator | 2025-05-04 01:38:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:33.893111 | orchestrator | 2025-05-04 01:38:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:33.893268 | orchestrator | 2025-05-04 01:38:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:36.945725 | orchestrator | 2025-05-04 01:38:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:36.945886 | orchestrator | 2025-05-04 01:38:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:40.000241 | orchestrator | 2025-05-04 01:38:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:40.000388 | orchestrator | 2025-05-04 01:38:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:43.044878 | orchestrator | 2025-05-04 01:38:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:43.045032 | orchestrator | 2025-05-04 01:38:43 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:46.091667 | orchestrator | 2025-05-04 01:38:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:46.091821 | orchestrator | 2025-05-04 01:38:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:49.143120 | orchestrator | 2025-05-04 01:38:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:49.143278 | orchestrator | 2025-05-04 01:38:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:52.193934 | orchestrator | 2025-05-04 01:38:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:52.194159 | orchestrator | 2025-05-04 01:38:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:55.252188 | orchestrator | 2025-05-04 01:38:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:55.252347 | orchestrator | 2025-05-04 01:38:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:38:58.304137 | orchestrator | 2025-05-04 01:38:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:38:58.304284 | orchestrator | 2025-05-04 01:38:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:01.357848 | orchestrator | 2025-05-04 01:38:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:01.358001 | orchestrator | 2025-05-04 01:39:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:04.407719 | orchestrator | 2025-05-04 01:39:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:04.407878 | orchestrator | 2025-05-04 01:39:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:07.450792 | orchestrator | 2025-05-04 01:39:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:07.450958 | orchestrator | 2025-05-04 01:39:07 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:10.503026 | orchestrator | 2025-05-04 01:39:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:10.503174 | orchestrator | 2025-05-04 01:39:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:13.550011 | orchestrator | 2025-05-04 01:39:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:13.550214 | orchestrator | 2025-05-04 01:39:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:16.596905 | orchestrator | 2025-05-04 01:39:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:16.597057 | orchestrator | 2025-05-04 01:39:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:19.649026 | orchestrator | 2025-05-04 01:39:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:19.649195 | orchestrator | 2025-05-04 01:39:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:22.699914 | orchestrator | 2025-05-04 01:39:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:22.700055 | orchestrator | 2025-05-04 01:39:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:25.756610 | orchestrator | 2025-05-04 01:39:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:25.756767 | orchestrator | 2025-05-04 01:39:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:28.803456 | orchestrator | 2025-05-04 01:39:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:28.803604 | orchestrator | 2025-05-04 01:39:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:31.853785 | orchestrator | 2025-05-04 01:39:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:31.853945 | orchestrator | 2025-05-04 01:39:31 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:34.905756 | orchestrator | 2025-05-04 01:39:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:34.905883 | orchestrator | 2025-05-04 01:39:34 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:37.951892 | orchestrator | 2025-05-04 01:39:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:37.952061 | orchestrator | 2025-05-04 01:39:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:41.009845 | orchestrator | 2025-05-04 01:39:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:41.009995 | orchestrator | 2025-05-04 01:39:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:44.061120 | orchestrator | 2025-05-04 01:39:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:44.061264 | orchestrator | 2025-05-04 01:39:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:47.116928 | orchestrator | 2025-05-04 01:39:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:47.117084 | orchestrator | 2025-05-04 01:39:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:50.168947 | orchestrator | 2025-05-04 01:39:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:50.169102 | orchestrator | 2025-05-04 01:39:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:53.221310 | orchestrator | 2025-05-04 01:39:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:53.221553 | orchestrator | 2025-05-04 01:39:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:56.273574 | orchestrator | 2025-05-04 01:39:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:56.273719 | orchestrator | 2025-05-04 01:39:56 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:39:59.325541 | orchestrator | 2025-05-04 01:39:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:39:59.325724 | orchestrator | 2025-05-04 01:39:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:02.368046 | orchestrator | 2025-05-04 01:39:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:02.368212 | orchestrator | 2025-05-04 01:40:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:05.416523 | orchestrator | 2025-05-04 01:40:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:05.416686 | orchestrator | 2025-05-04 01:40:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:08.462160 | orchestrator | 2025-05-04 01:40:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:08.462331 | orchestrator | 2025-05-04 01:40:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:11.504050 | orchestrator | 2025-05-04 01:40:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:11.504198 | orchestrator | 2025-05-04 01:40:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:14.547982 | orchestrator | 2025-05-04 01:40:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:14.548163 | orchestrator | 2025-05-04 01:40:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:17.606516 | orchestrator | 2025-05-04 01:40:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:17.606670 | orchestrator | 2025-05-04 01:40:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:20.662091 | orchestrator | 2025-05-04 01:40:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:40:20.662248 | orchestrator | 2025-05-04 01:40:20 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:40:23.708896 | orchestrator | 2025-05-04 01:40:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:42:22.707450 | orchestrator | 2025-05-04 01:42:22 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:42:37.969786 | orchestrator | 2025-05-04 01:42:37 | INFO  | Task a3955107-6ba0-4aa0-a12b-0489ae26a62a is in state STARTED 2025-05-04 01:42:50.201903 | orchestrator | 2025-05-04 01:42:50 | INFO  | Task a3955107-6ba0-4aa0-a12b-0489ae26a62a is in state SUCCESS 2025-05-04 01:42:50.204088 | orchestrator | 2025-05-04 01:42:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:43:05.457032 | 
orchestrator | 2025-05-04 01:43:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:48:47.202544 | orchestrator | 2025-05-04 01:48:47 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:48:50.251734 | orchestrator | 2025-05-04 01:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:48:50.251891 | orchestrator | 2025-05-04 01:48:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:48:53.301858 | orchestrator | 2025-05-04 01:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:48:53.302108 | orchestrator | 2025-05-04 01:48:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:48:56.352674 | orchestrator | 2025-05-04 01:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:48:56.352816 | orchestrator | 2025-05-04 01:48:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:48:59.405262 | orchestrator | 2025-05-04 01:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:48:59.405471 | orchestrator | 2025-05-04 01:48:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:02.456020 | orchestrator | 2025-05-04 01:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:02.456131 | orchestrator | 2025-05-04 01:49:02 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:05.512905 | orchestrator | 2025-05-04 01:49:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:05.513060 | orchestrator | 2025-05-04 01:49:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:08.566163 | orchestrator | 2025-05-04 01:49:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:08.566295 | orchestrator | 2025-05-04 01:49:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:11.615844 | orchestrator | 2025-05-04 01:49:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:11.615999 | orchestrator | 2025-05-04 01:49:11 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:14.660378 | orchestrator | 2025-05-04 01:49:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:14.660554 | orchestrator | 2025-05-04 01:49:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:17.713110 | orchestrator | 2025-05-04 01:49:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:17.713251 | orchestrator | 2025-05-04 01:49:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:17.713429 | orchestrator | 2025-05-04 01:49:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:20.756074 | orchestrator | 2025-05-04 01:49:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:23.802360 | orchestrator | 2025-05-04 01:49:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:23.802563 | orchestrator | 2025-05-04 01:49:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:26.855486 | orchestrator | 2025-05-04 01:49:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:26.855652 | orchestrator | 2025-05-04 01:49:26 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:29.901425 | orchestrator | 2025-05-04 01:49:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:29.901573 | orchestrator | 2025-05-04 01:49:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:32.947959 | orchestrator | 2025-05-04 01:49:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:32.948150 | orchestrator | 2025-05-04 01:49:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:35.998871 | orchestrator | 2025-05-04 01:49:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:35.999022 | orchestrator | 2025-05-04 01:49:36 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:39.046435 | orchestrator | 2025-05-04 01:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:39.046590 | orchestrator | 2025-05-04 01:49:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:42.101554 | orchestrator | 2025-05-04 01:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:42.101705 | orchestrator | 2025-05-04 01:49:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:45.150973 | orchestrator | 2025-05-04 01:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:45.151127 | orchestrator | 2025-05-04 01:49:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:48.197859 | orchestrator | 2025-05-04 01:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:48.198090 | orchestrator | 2025-05-04 01:49:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:51.247591 | orchestrator | 2025-05-04 01:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:51.247753 | orchestrator | 2025-05-04 01:49:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:54.296539 | orchestrator | 2025-05-04 01:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:54.296685 | orchestrator | 2025-05-04 01:49:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:49:57.346489 | orchestrator | 2025-05-04 01:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:49:57.346647 | orchestrator | 2025-05-04 01:49:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:00.400015 | orchestrator | 2025-05-04 01:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:00.400127 | orchestrator | 2025-05-04 01:50:00 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:03.447144 | orchestrator | 2025-05-04 01:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:03.447306 | orchestrator | 2025-05-04 01:50:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:06.498371 | orchestrator | 2025-05-04 01:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:06.498569 | orchestrator | 2025-05-04 01:50:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:09.546567 | orchestrator | 2025-05-04 01:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:09.546724 | orchestrator | 2025-05-04 01:50:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:12.593620 | orchestrator | 2025-05-04 01:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:12.593773 | orchestrator | 2025-05-04 01:50:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:15.651641 | orchestrator | 2025-05-04 01:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:15.651796 | orchestrator | 2025-05-04 01:50:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:18.696996 | orchestrator | 2025-05-04 01:50:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:18.697197 | orchestrator | 2025-05-04 01:50:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:18.697298 | orchestrator | 2025-05-04 01:50:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:21.757723 | orchestrator | 2025-05-04 01:50:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:24.805687 | orchestrator | 2025-05-04 01:50:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:24.805807 | orchestrator | 2025-05-04 01:50:24 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:27.852319 | orchestrator | 2025-05-04 01:50:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:27.852545 | orchestrator | 2025-05-04 01:50:27 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:30.908859 | orchestrator | 2025-05-04 01:50:27 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:30.909012 | orchestrator | 2025-05-04 01:50:30 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:33.959890 | orchestrator | 2025-05-04 01:50:30 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:33.960124 | orchestrator | 2025-05-04 01:50:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:33.960254 | orchestrator | 2025-05-04 01:50:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:37.014346 | orchestrator | 2025-05-04 01:50:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:40.064463 | orchestrator | 2025-05-04 01:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:40.064620 | orchestrator | 2025-05-04 01:50:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:43.111687 | orchestrator | 2025-05-04 01:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:43.111844 | orchestrator | 2025-05-04 01:50:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:46.166180 | orchestrator | 2025-05-04 01:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:46.166336 | orchestrator | 2025-05-04 01:50:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:49.221603 | orchestrator | 2025-05-04 01:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:49.221747 | orchestrator | 2025-05-04 01:50:49 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:52.279357 | orchestrator | 2025-05-04 01:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:52.279572 | orchestrator | 2025-05-04 01:50:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:55.329541 | orchestrator | 2025-05-04 01:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:55.329670 | orchestrator | 2025-05-04 01:50:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:50:58.381801 | orchestrator | 2025-05-04 01:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:50:58.381957 | orchestrator | 2025-05-04 01:50:58 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:01.475074 | orchestrator | 2025-05-04 01:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:01.475232 | orchestrator | 2025-05-04 01:51:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:04.550221 | orchestrator | 2025-05-04 01:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:04.550426 | orchestrator | 2025-05-04 01:51:04 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:07.593269 | orchestrator | 2025-05-04 01:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:07.593526 | orchestrator | 2025-05-04 01:51:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:10.656716 | orchestrator | 2025-05-04 01:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:10.656872 | orchestrator | 2025-05-04 01:51:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:13.712056 | orchestrator | 2025-05-04 01:51:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:13.712229 | orchestrator | 2025-05-04 01:51:13 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:16.760075 | orchestrator | 2025-05-04 01:51:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:16.760227 | orchestrator | 2025-05-04 01:51:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:19.810437 | orchestrator | 2025-05-04 01:51:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:19.810602 | orchestrator | 2025-05-04 01:51:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:22.866822 | orchestrator | 2025-05-04 01:51:19 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:22.866986 | orchestrator | 2025-05-04 01:51:22 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:25.918880 | orchestrator | 2025-05-04 01:51:22 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:25.919026 | orchestrator | 2025-05-04 01:51:25 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:28.968251 | orchestrator | 2025-05-04 01:51:25 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:28.968373 | orchestrator | 2025-05-04 01:51:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:32.015372 | orchestrator | 2025-05-04 01:51:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:32.015582 | orchestrator | 2025-05-04 01:51:32 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:35.070298 | orchestrator | 2025-05-04 01:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:35.070520 | orchestrator | 2025-05-04 01:51:35 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:38.123820 | orchestrator | 2025-05-04 01:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:38.123982 | orchestrator | 2025-05-04 01:51:38 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:41.178503 | orchestrator | 2025-05-04 01:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:41.178673 | orchestrator | 2025-05-04 01:51:41 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:44.229577 | orchestrator | 2025-05-04 01:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:44.229736 | orchestrator | 2025-05-04 01:51:44 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:47.279221 | orchestrator | 2025-05-04 01:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:47.279415 | orchestrator | 2025-05-04 01:51:47 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:50.335428 | orchestrator | 2025-05-04 01:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:50.335621 | orchestrator | 2025-05-04 01:51:50 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:53.390914 | orchestrator | 2025-05-04 01:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:53.391076 | orchestrator | 2025-05-04 01:51:53 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:56.439805 | orchestrator | 2025-05-04 01:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:56.439952 | orchestrator | 2025-05-04 01:51:56 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:51:59.490699 | orchestrator | 2025-05-04 01:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:51:59.490855 | orchestrator | 2025-05-04 01:51:59 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:02.536644 | orchestrator | 2025-05-04 01:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:02.536805 | orchestrator | 2025-05-04 01:52:02 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:05.583712 | orchestrator | 2025-05-04 01:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:05.583855 | orchestrator | 2025-05-04 01:52:05 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:08.627187 | orchestrator | 2025-05-04 01:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:08.627338 | orchestrator | 2025-05-04 01:52:08 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:11.678515 | orchestrator | 2025-05-04 01:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:11.678671 | orchestrator | 2025-05-04 01:52:11 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:14.732260 | orchestrator | 2025-05-04 01:52:11 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:14.732425 | orchestrator | 2025-05-04 01:52:14 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:17.785177 | orchestrator | 2025-05-04 01:52:14 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:17.785330 | orchestrator | 2025-05-04 01:52:17 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:20.832430 | orchestrator | 2025-05-04 01:52:17 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:20.832621 | orchestrator | 2025-05-04 01:52:20 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:23.884818 | orchestrator | 2025-05-04 01:52:20 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:23.885002 | orchestrator | 2025-05-04 01:52:23 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:26.937595 | orchestrator | 2025-05-04 01:52:23 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:26.937722 | orchestrator | 2025-05-04 01:52:26 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:29.988928 | orchestrator | 2025-05-04 01:52:26 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:29.989157 | orchestrator | 2025-05-04 01:52:29 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:33.045425 | orchestrator | 2025-05-04 01:52:29 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:33.045581 | orchestrator | 2025-05-04 01:52:33 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:36.094771 | orchestrator | 2025-05-04 01:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:36.094951 | orchestrator | 2025-05-04 01:52:36 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:39.161834 | orchestrator | 2025-05-04 01:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:39.161996 | orchestrator | 2025-05-04 01:52:39 | INFO  | Task 633ead8d-cd5e-4fd7-ad7e-45b94a60137b is in state STARTED 2025-05-04 01:52:39.162446 | orchestrator | 2025-05-04 01:52:39 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:39.162543 | orchestrator | 2025-05-04 01:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:42.219042 | orchestrator | 2025-05-04 01:52:42 | INFO  | Task 633ead8d-cd5e-4fd7-ad7e-45b94a60137b is in state STARTED 2025-05-04 01:52:42.220350 | orchestrator | 2025-05-04 01:52:42 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:42.221199 | orchestrator | 2025-05-04 01:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:45.288927 | orchestrator | 2025-05-04 01:52:45 | INFO  | Task 633ead8d-cd5e-4fd7-ad7e-45b94a60137b is in state STARTED 2025-05-04 01:52:45.290168 | orchestrator | 2025-05-04 01:52:45 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:48.341116 | orchestrator | 
2025-05-04 01:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:48.341245 | orchestrator | 2025-05-04 01:52:48 | INFO  | Task 633ead8d-cd5e-4fd7-ad7e-45b94a60137b is in state SUCCESS 2025-05-04 01:52:48.342701 | orchestrator | 2025-05-04 01:52:48 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:51.392708 | orchestrator | 2025-05-04 01:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:51.392845 | orchestrator | 2025-05-04 01:52:51 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:54.446318 | orchestrator | 2025-05-04 01:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:54.446565 | orchestrator | 2025-05-04 01:52:54 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:52:57.489576 | orchestrator | 2025-05-04 01:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:52:57.489734 | orchestrator | 2025-05-04 01:52:57 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:00.544016 | orchestrator | 2025-05-04 01:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:00.544176 | orchestrator | 2025-05-04 01:53:00 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:03.595750 | orchestrator | 2025-05-04 01:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:03.595903 | orchestrator | 2025-05-04 01:53:03 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:06.643607 | orchestrator | 2025-05-04 01:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:06.643846 | orchestrator | 2025-05-04 01:53:06 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:09.699611 | orchestrator | 2025-05-04 01:53:06 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:09.699761 | orchestrator | 2025-05-04 
01:53:09 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:12.746166 | orchestrator | 2025-05-04 01:53:09 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:12.746336 | orchestrator | 2025-05-04 01:53:12 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:15.794469 | orchestrator | 2025-05-04 01:53:12 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:15.794627 | orchestrator | 2025-05-04 01:53:15 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:18.844312 | orchestrator | 2025-05-04 01:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:18.844543 | orchestrator | 2025-05-04 01:53:18 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:21.893358 | orchestrator | 2025-05-04 01:53:18 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:21.893569 | orchestrator | 2025-05-04 01:53:21 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:24.948609 | orchestrator | 2025-05-04 01:53:21 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:24.948808 | orchestrator | 2025-05-04 01:53:24 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:28.000805 | orchestrator | 2025-05-04 01:53:24 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:28.000967 | orchestrator | 2025-05-04 01:53:28 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:31.059262 | orchestrator | 2025-05-04 01:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:31.059463 | orchestrator | 2025-05-04 01:53:31 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:34.111571 | orchestrator | 2025-05-04 01:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:34.111718 | orchestrator | 2025-05-04 01:53:34 | INFO 
 | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:37.168455 | orchestrator | 2025-05-04 01:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:37.168613 | orchestrator | 2025-05-04 01:53:37 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:40.223119 | orchestrator | 2025-05-04 01:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:40.223286 | orchestrator | 2025-05-04 01:53:40 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:43.277289 | orchestrator | 2025-05-04 01:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:43.277573 | orchestrator | 2025-05-04 01:53:43 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:46.324462 | orchestrator | 2025-05-04 01:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:46.324636 | orchestrator | 2025-05-04 01:53:46 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:49.377175 | orchestrator | 2025-05-04 01:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:49.377344 | orchestrator | 2025-05-04 01:53:49 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:52.428584 | orchestrator | 2025-05-04 01:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:52.428793 | orchestrator | 2025-05-04 01:53:52 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:55.482823 | orchestrator | 2025-05-04 01:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:55.482986 | orchestrator | 2025-05-04 01:53:55 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:53:58.537067 | orchestrator | 2025-05-04 01:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:53:58.537236 | orchestrator | 2025-05-04 01:53:58 | INFO  | Task 
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 01:54:01.589623 | orchestrator | 2025-05-04 01:53:58 | INFO  | Wait 1 second(s) until the next check 2025-05-04 01:54:01.589806 | orchestrator | 2025-05-04 01:54:01 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED
[identical polling output elided: the same "Wait 1 second(s) until the next check" / "Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED" pair repeated roughly every 3 seconds from 01:54:04 through 02:00:04]
2025-05-04 02:00:04.707879 | orchestrator | 2025-05-04 02:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:04.708045 | orchestrator | 2025-05-04 02:00:04 | INFO  | Task
06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:07.754765 | orchestrator | 2025-05-04 02:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:07.754942 | orchestrator | 2025-05-04 02:00:07 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:10.797408 | orchestrator | 2025-05-04 02:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:10.797638 | orchestrator | 2025-05-04 02:00:10 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:13.840386 | orchestrator | 2025-05-04 02:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:13.840615 | orchestrator | 2025-05-04 02:00:13 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:16.896115 | orchestrator | 2025-05-04 02:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:16.896284 | orchestrator | 2025-05-04 02:00:16 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:19.951729 | orchestrator | 2025-05-04 02:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-04 02:00:19.951871 | orchestrator | 2025-05-04 02:00:19 | INFO  | Task 06b7ef9d-42d9-4dab-ab51-d0c173886a5a is in state STARTED 2025-05-04 02:00:22.510251 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-04 02:00:22.515443 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-04 02:00:23.262490 | 2025-05-04 02:00:23.262745 | PLAY [Post output play] 2025-05-04 02:00:23.293114 | 2025-05-04 02:00:23.293247 | LOOP [stage-output : Register sources] 2025-05-04 02:00:23.378705 | 2025-05-04 02:00:23.379008 | TASK [stage-output : Check sudo] 2025-05-04 02:00:24.090100 | orchestrator | sudo: a password is required 2025-05-04 02:00:24.424468 | orchestrator | ok: Runtime: 0:00:00.015669 2025-05-04 02:00:24.443222 | 2025-05-04 02:00:24.443389 | 
LOOP [stage-output : Set source and destination for files and folders] 2025-05-04 02:00:24.482165 | 2025-05-04 02:00:24.482386 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-04 02:00:24.566454 | orchestrator | ok 2025-05-04 02:00:24.579034 | 2025-05-04 02:00:24.579184 | LOOP [stage-output : Ensure target folders exist] 2025-05-04 02:00:25.047909 | orchestrator | ok: "docs" 2025-05-04 02:00:25.048330 | 2025-05-04 02:00:25.290914 | orchestrator | ok: "artifacts" 2025-05-04 02:00:25.542946 | orchestrator | ok: "logs" 2025-05-04 02:00:25.561740 | 2025-05-04 02:00:25.561909 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-04 02:00:25.601214 | 2025-05-04 02:00:25.601466 | TASK [stage-output : Make all log files readable] 2025-05-04 02:00:25.896750 | orchestrator | ok 2025-05-04 02:00:25.907137 | 2025-05-04 02:00:25.907270 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-04 02:00:25.953028 | orchestrator | skipping: Conditional result was False 2025-05-04 02:00:25.970576 | 2025-05-04 02:00:25.970732 | TASK [stage-output : Discover log files for compression] 2025-05-04 02:00:25.996681 | orchestrator | skipping: Conditional result was False 2025-05-04 02:00:26.014271 | 2025-05-04 02:00:26.014407 | LOOP [stage-output : Archive everything from logs] 2025-05-04 02:00:26.084892 | 2025-05-04 02:00:26.085069 | PLAY [Post cleanup play] 2025-05-04 02:00:26.109118 | 2025-05-04 02:00:26.109247 | TASK [Set cloud fact (Zuul deployment)] 2025-05-04 02:00:26.177665 | orchestrator | ok 2025-05-04 02:00:26.190349 | 2025-05-04 02:00:26.190491 | TASK [Set cloud fact (local deployment)] 2025-05-04 02:00:26.225824 | orchestrator | skipping: Conditional result was False 2025-05-04 02:00:26.244205 | 2025-05-04 02:00:26.244344 | TASK [Clean the cloud environment] 2025-05-04 02:00:26.963440 | orchestrator | 2025-05-04 02:00:26 - clean up servers 2025-05-04 02:00:27.883757 | orchestrator | 2025-05-04 02:00:27 - 
testbed-manager 2025-05-04 02:00:27.975429 | orchestrator | 2025-05-04 02:00:27 - testbed-node-3 2025-05-04 02:00:28.090699 | orchestrator | 2025-05-04 02:00:28 - testbed-node-1 2025-05-04 02:00:28.206767 | orchestrator | 2025-05-04 02:00:28 - testbed-node-4 2025-05-04 02:00:28.321035 | orchestrator | 2025-05-04 02:00:28 - testbed-node-0 2025-05-04 02:00:28.447649 | orchestrator | 2025-05-04 02:00:28 - testbed-node-2 2025-05-04 02:00:28.582282 | orchestrator | 2025-05-04 02:00:28 - testbed-node-5 2025-05-04 02:00:28.698776 | orchestrator | 2025-05-04 02:00:28 - clean up keypairs 2025-05-04 02:00:28.717873 | orchestrator | 2025-05-04 02:00:28 - testbed 2025-05-04 02:00:28.747229 | orchestrator | 2025-05-04 02:00:28 - wait for servers to be gone 2025-05-04 02:00:35.651400 | orchestrator | 2025-05-04 02:00:35 - clean up ports 2025-05-04 02:00:35.862905 | orchestrator | 2025-05-04 02:00:35 - 334c8cc6-831e-4f07-92f7-d2b39358fa90 2025-05-04 02:00:36.062164 | orchestrator | 2025-05-04 02:00:36 - 4967464c-ade0-4d86-ba70-ba21510038e5 2025-05-04 02:00:36.307314 | orchestrator | 2025-05-04 02:00:36 - 7ddda34a-23c0-46f0-a1f1-441e5fafeb58 2025-05-04 02:00:36.538658 | orchestrator | 2025-05-04 02:00:36 - bd28d316-7724-4439-949f-1b91493f66a5 2025-05-04 02:00:36.901713 | orchestrator | 2025-05-04 02:00:36 - c3368b08-b762-4dfe-a4ba-a6901fecdd53 2025-05-04 02:00:37.107371 | orchestrator | 2025-05-04 02:00:37 - e75119a0-6210-4f15-a7f6-c75b278489bf 2025-05-04 02:00:37.306173 | orchestrator | 2025-05-04 02:00:37 - fa71e89c-cafb-4947-bd51-8d1b5d1dc7dc 2025-05-04 02:00:37.504007 | orchestrator | 2025-05-04 02:00:37 - clean up volumes 2025-05-04 02:00:37.662370 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-3-node-base 2025-05-04 02:00:37.712879 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-5-node-base 2025-05-04 02:00:37.756593 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-1-node-base 2025-05-04 02:00:37.802076 | orchestrator | 2025-05-04 02:00:37 - 
testbed-volume-manager-base
2025-05-04 02:00:37.841866 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-2-node-base
2025-05-04 02:00:37.885600 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-0-node-base
2025-05-04 02:00:37.933449 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-12-node-0
2025-05-04 02:00:37.971862 | orchestrator | 2025-05-04 02:00:37 - testbed-volume-9-node-3
2025-05-04 02:00:38.016189 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-7-node-1
2025-05-04 02:00:38.066547 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-0-node-0
2025-05-04 02:00:38.120495 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-1-node-1
2025-05-04 02:00:38.167898 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-5-node-5
2025-05-04 02:00:38.217122 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-4-node-4
2025-05-04 02:00:38.256695 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-4-node-base
2025-05-04 02:00:38.300820 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-17-node-5
2025-05-04 02:00:38.341406 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-8-node-2
2025-05-04 02:00:38.387702 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-6-node-0
2025-05-04 02:00:38.430835 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-16-node-4
2025-05-04 02:00:38.474973 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-15-node-3
2025-05-04 02:00:38.513369 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-13-node-1
2025-05-04 02:00:38.556597 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-3-node-3
2025-05-04 02:00:38.597197 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-10-node-4
2025-05-04 02:00:38.642390 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-11-node-5
2025-05-04 02:00:38.682749 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-2-node-2
2025-05-04 02:00:38.723517 | orchestrator | 2025-05-04 02:00:38 - testbed-volume-14-node-2
2025-05-04 02:00:38.763395 | orchestrator | 2025-05-04 02:00:38 - disconnect routers
2025-05-04 02:00:38.884898 | orchestrator | 2025-05-04 02:00:38 - testbed
2025-05-04 02:00:40.315363 | orchestrator | 2025-05-04 02:00:40 - clean up subnets
2025-05-04 02:00:40.365408 | orchestrator | 2025-05-04 02:00:40 - subnet-testbed-management
2025-05-04 02:00:40.498725 | orchestrator | 2025-05-04 02:00:40 - clean up networks
2025-05-04 02:00:40.670957 | orchestrator | 2025-05-04 02:00:40 - net-testbed-management
2025-05-04 02:00:41.009403 | orchestrator | 2025-05-04 02:00:41 - clean up security groups
2025-05-04 02:00:41.043320 | orchestrator | 2025-05-04 02:00:41 - testbed-node
2025-05-04 02:00:41.125521 | orchestrator | 2025-05-04 02:00:41 - testbed-management
2025-05-04 02:00:41.239281 | orchestrator | 2025-05-04 02:00:41 - clean up floating ips
2025-05-04 02:00:41.270729 | orchestrator | 2025-05-04 02:00:41 - 81.163.193.169
2025-05-04 02:00:41.650373 | orchestrator | 2025-05-04 02:00:41 - clean up routers
2025-05-04 02:00:41.710306 | orchestrator | 2025-05-04 02:00:41 - testbed
2025-05-04 02:00:42.806001 | orchestrator | changed
2025-05-04 02:00:42.855348 |
2025-05-04 02:00:42.855452 | PLAY RECAP
2025-05-04 02:00:42.855521 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-04 02:00:42.855549 |
2025-05-04 02:00:42.983640 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-04 02:00:42.991772 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-04 02:00:43.703921 |
2025-05-04 02:00:43.704118 | PLAY [Base post-fetch]
2025-05-04 02:00:43.734329 |
2025-05-04 02:00:43.734469 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-04 02:00:43.800974 | orchestrator | skipping: Conditional result was False
2025-05-04 02:00:43.809332 |
2025-05-04 02:00:43.809475 | TASK [fetch-output : Set log path for single node]
2025-05-04 02:00:43.871298 | orchestrator | ok
2025-05-04 02:00:43.881166 |
2025-05-04 02:00:43.881300 | LOOP [fetch-output : Ensure local output dirs]
2025-05-04 02:00:44.366450 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/logs"
2025-05-04 02:00:44.635201 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/artifacts"
2025-05-04 02:00:44.911659 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dced2a157bc241aea55b02a5a6515176/work/docs"
2025-05-04 02:00:44.935275 |
2025-05-04 02:00:44.935430 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-04 02:00:45.778032 | orchestrator | changed: .d..t...... ./
2025-05-04 02:00:45.778623 | orchestrator | changed: All items complete
2025-05-04 02:00:45.778695 |
2025-05-04 02:00:46.394529 | orchestrator | changed: .d..t...... ./
2025-05-04 02:00:47.008843 | orchestrator | changed: .d..t...... ./
2025-05-04 02:00:47.039964 |
2025-05-04 02:00:47.040104 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-04 02:00:47.083137 | orchestrator | skipping: Conditional result was False
2025-05-04 02:00:47.091249 | orchestrator | skipping: Conditional result was False
2025-05-04 02:00:47.145448 |
2025-05-04 02:00:47.145563 | PLAY RECAP
2025-05-04 02:00:47.145619 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-04 02:00:47.145646 |
2025-05-04 02:00:47.260143 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-04 02:00:47.268412 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-04 02:00:47.980929 |
2025-05-04 02:00:47.981091 | PLAY [Base post]
2025-05-04 02:00:48.009629 |
2025-05-04 02:00:48.009786 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-04 02:00:48.860094 | orchestrator | changed
2025-05-04 02:00:48.899030 |
2025-05-04 02:00:48.899157 | PLAY RECAP
2025-05-04 02:00:48.899224 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-04 02:00:48.899289 |
2025-05-04 02:00:49.010699 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-04 02:00:49.018048 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-04 02:00:49.776731 |
2025-05-04 02:00:49.776899 | PLAY [Base post-logs]
2025-05-04 02:00:49.792944 |
2025-05-04 02:00:49.793076 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-04 02:00:50.243107 | localhost | changed
2025-05-04 02:00:50.249970 |
2025-05-04 02:00:50.250156 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-04 02:00:50.291750 | localhost | ok
2025-05-04 02:00:50.302384 |
2025-05-04 02:00:50.302558 | TASK [Set zuul-log-path fact]
2025-05-04 02:00:50.334604 | localhost | ok
2025-05-04 02:00:50.355844 |
2025-05-04 02:00:50.355958 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-04 02:00:50.397554 | localhost | ok
2025-05-04 02:00:50.408715 |
2025-05-04 02:00:50.408865 | TASK [upload-logs : Create log directories]
2025-05-04 02:00:50.920641 | localhost | changed
2025-05-04 02:00:50.926180 |
2025-05-04 02:00:50.926301 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-04 02:00:51.445815 | localhost -> localhost | ok: Runtime: 0:00:00.008837
2025-05-04 02:00:51.456093 |
2025-05-04 02:00:51.456268 | TASK [upload-logs : Upload logs to log server]
2025-05-04 02:00:52.062667 | localhost | Output suppressed because no_log was given
2025-05-04 02:00:52.068927 |
2025-05-04 02:00:52.069114 | LOOP [upload-logs : Compress console log and json output]
2025-05-04 02:00:52.134661 | localhost | skipping: Conditional result was False
2025-05-04 02:00:52.152122 | localhost | skipping: Conditional result was False
2025-05-04 02:00:52.165981 |
2025-05-04 02:00:52.166160 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-04 02:00:52.230689 | localhost | skipping: Conditional result was False
2025-05-04 02:00:52.231394 |
2025-05-04 02:00:52.243082 | localhost | skipping: Conditional result was False
2025-05-04 02:00:52.251694 |
2025-05-04 02:00:52.251882 | LOOP [upload-logs : Upload console log and json output]